Analyzing and Mitigating Object Hallucination in Large Vision-Language Models

Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao

arXiv:2310.00754 [cs.LG, cs.CL, cs.CV]. Published and last updated 2023-10-01. PDF: http://arxiv.org/pdf/2310.00754

Abstract: Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
Figure 4: Human evaluation annotation interface.
# B DETAILED PROOF
B.1 PROOF OF THEOREM 2.1
Let us denote N = |D(1)| = |D(2)|. For the detection rule of the first object, we have
β̂(1)_k = (1/N) Σ_{(s_<i, x, y_i) ∈ D(1)} y_{i,k} φ_k(s_<i, x).

As φ_k(s_<i, x) | y_{i,k} ∼ N(y_{i,k} μ*_k, σ² I_d), we write y_{i,k} φ_k(s_<i, x) = μ*_k + ε_{i,k}. Now, suppose among all samples, a fraction ρ₀ ∈ (0, 1) of samples have both y_1 and y_2 equal to 1. We can then write

(β̂(1)_1, β̂(1)_2) = (ρ₀ μ*_1 + (1/N) Σ_{i=1}^{N} ε_{i,1}, ρ₀ μ*_2 + (1/N) Σ_{i=1}^{N} ε_{i,2}).
Use Φ(·) to denote the cumulative distribution function of a standard normal distribution. Then for the prediction function f̂(1)_2 = sgn(⟨φ_1(s_<i, x), β̂(1)_1⟩ + ⟨φ_2(s_<i, x), β̂(1)_2⟩), we have
Err(f̂(1)_2) = (1/2) P(⟨φ_1(s_<i, x), β̂(1)_1⟩ + ⟨φ_2(s_<i, x), β̂(1)_2⟩ < 0 | y = 1) + (1/2) P(⟨φ_1(s_<i, x), β̂(1)_1⟩ + ⟨φ_2(s_<i, x), β̂(1)_2⟩ > 0 | y = −1) = Φ(−(ρ₀ ‖μ*_1‖² + ρ₀ ‖μ*_2‖²) / √(ρ₀² ‖μ*_1‖² + ρ₀² ‖μ*_2‖² + σ²d/N + σ²d/N)) + o(1).
Similarly, we have
Err(f̂(2)_2) = Φ(−(ρ ‖μ*_1‖² + ρ ‖μ*_2‖²) / √(ρ² ‖μ*_1‖² + ρ² ‖μ*_2‖² + σ²d/N + σ²d/N)) + o(1).
As (ρ ‖μ*_1‖² + ρ ‖μ*_2‖²) / √(ρ² ‖μ*_1‖² + ρ² ‖μ*_2‖² + σ²d/N + σ²d/N) is monotonically increasing in ρ while Φ(−x) is monotonically decreasing in x, the error above is monotonically decreasing in ρ, and we complete the proof.
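The monotonicity used in this last step can be sanity-checked numerically. The sketch below uses hypothetical values for ‖μ*_1‖², ‖μ*_2‖², σ, d, and N (placeholders, none taken from the paper):

```python
# Numerical sanity check of the monotonicity argument above, with hypothetical
# values for ||mu*_1||^2, ||mu*_2||^2, sigma, d, and N (illustrative only).
import math

def phi(x):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def err(rho, mu1_sq=1.0, mu2_sq=1.0, sigma=1.0, d=100, N=1000):
    """Phi(-(rho*mu1_sq + rho*mu2_sq) / sqrt(rho^2*mu1_sq + rho^2*mu2_sq + 2*sigma^2*d/N))."""
    num = rho * mu1_sq + rho * mu2_sq
    den = math.sqrt(rho**2 * mu1_sq + rho**2 * mu2_sq + 2.0 * sigma**2 * d / N)
    return phi(-num / den)

# The error is decreasing in rho: a larger co-occurrence fraction lowers it.
errors = [err(rho) for rho in (0.2, 0.4, 0.6, 0.8)]
assert all(a > b for a, b in zip(errors, errors[1:]))
```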
B.2 PROOF OF THEOREM 2.2
We first analyze the uncertainty score. In fact, we have
p̂_k = (1/m) Σ_{(s_<i, x, y)} σ(⟨φ_k(s_<i, x), β̂_k⟩) = E[σ(⟨φ_k(s_<i, x), β̂_k⟩)] + o_P(1) = E[1 / (1 + exp(‖β_k‖(1 + Z)))] + o_P(1),
where Z ∼ N(0, 1) is a standard normal random variable.
Therefore, p̂_k decreases when ‖β_k‖ increases. Choosing samples with small p̂_k (i.e., large −log(p̂_k)) corresponds to larger sample sizes for the classes with larger ‖μ*_k‖. Then we analyze the misclassification error. For f̂_k = sgn(⟨φ_k(s_<i, x), β̂_k⟩), we have
Err(f̂_k) = P(sgn(⟨φ_k(s_<i, x), β̂_k⟩) ≠ y) = (1/2) P(⟨φ_k(s_<i, x), β̂_k⟩ < 0 | y = 1) + (1/2) P(⟨φ_k(s_<i, x), β̂_k⟩ > 0 | y = −1).
As φ_k(s_<i, x) | y ∼ N(y_k · μ*_k, I_d), we have

P(⟨φ_k(s_<i, x), β̂_k⟩ < 0 | y = 1) = P(⟨φ_k(s_<i, x), β̂_k⟩ > 0 | y = −1) = Φ(−⟨μ*_k, β̂_k⟩ / ‖β̂_k‖).
As β̂_k = μ*_k + (1/n_k) Σ_{i=1}^{n_k} ε_i := μ*_k + (1/√n_k) Z, we have
⟨μ*_k, β̂_k⟩ / ‖β̂_k‖ = (‖μ*_k‖² + (1/√n_k) ⟨μ*_k, Z⟩) / √(‖μ*_k‖² + (2/√n_k) ⟨μ*_k, Z⟩ + ‖Z‖²/n_k).
As we assume ‖μ*_k‖² ≪ d, we have

⟨μ*_k, β̂_k⟩ / ‖β̂_k‖ = ‖μ*_k‖² / √(‖μ*_k‖² + d/n_k) + o_P(1).
As a result, if the total sample size is fixed, choosing large n_k for small ‖μ*_k‖ will make the average misclassification error small.
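This trade-off can be illustrated numerically; the values of ‖μ*_k‖² and d below are hypothetical placeholders rather than quantities from the paper:

```python
# Per-class error Phi(-||mu*_k||^2 / sqrt(||mu*_k||^2 + d/n_k)) as a function of
# the class sample size n_k (mu_sq and d are made-up illustrative values).
import math

def phi(x):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def class_err(n_k, mu_sq=0.5, d=200):
    return phi(-mu_sq / math.sqrt(mu_sq + d / n_k))

# Allocating more samples to a class with small ||mu*_k|| lowers its error.
errs = [class_err(n) for n in (50, 200, 800)]
assert errs[0] > errs[1] > errs[2]
```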
C ADDITIONAL ANALYSIS OF LURE
C.1 MODEL PERFORMANCE ANALYSIS WITH ADDITIONAL METRICS
In this section, we conduct additional analysis using commonly used metrics from vision-language models on the same dataset, and discuss the applicability of these methods to hallucination evaluation.
C.1.1 DESCRIPTIONS OF ADDITIONAL METRICS
BLEU BLEU (Bilingual Evaluation Understudy (Papineni et al., 2002)) is a metric used to evaluate the quality of machine-generated translations by comparing them to one or more reference translations. The BLEU score is based on the idea of precision in n-grams, which are contiguous sequences of n words. It measures how well the generated translation matches the reference translations in terms of n-gram overlap.
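The modified n-gram precision at the heart of BLEU can be sketched as follows (a simplified illustration: full BLEU, as in Papineni et al., also combines precisions for n = 1..4 geometrically and applies a brevity penalty):

```python
# A minimal sketch of BLEU-style modified (clipped) n-gram precision.
from collections import Counter

def ngram_precision(candidate, reference, n=2):
    """Clipped n-gram precision of a candidate sentence against one reference."""
    cand = candidate.split()
    ref = reference.split()
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    # Clip each candidate n-gram count by its count in the reference.
    overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    total = sum(cand_ngrams.values())
    return overlap / total if total else 0.0

print(ngram_precision("a cat sits on the mat", "the cat sits on the mat", n=2))  # → 0.8
```

Here 4 of the 5 candidate bigrams also appear in the reference, giving a bigram precision of 0.8.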
BertScore BERTScore (Zhang et al., 2019) is a method for evaluating the quality of natural language generation or summarization systems. BERTScore measures the similarity between a reference text and a generated text by computing contextualized embeddings using BERT.
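The greedy-matching idea behind BERTScore can be sketched as below; the 2-D token embeddings are made-up placeholders, whereas real BERTScore uses contextual BERT embeddings (and optionally IDF weighting):

```python
# A toy sketch of BERTScore-style greedy matching over token embeddings.
import math

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def bertscore_f1(cand_emb, ref_emb):
    # Precision: each candidate token greedily matches its closest reference token.
    p = sum(max(cos(c, r) for r in ref_emb) for c in cand_emb) / len(cand_emb)
    # Recall: each reference token greedily matches its closest candidate token.
    r = sum(max(cos(rf, c) for c in cand_emb) for rf in ref_emb) / len(ref_emb)
    return 2 * p * r / (p + r)

cand = [(1.0, 0.0), (0.6, 0.8)]   # hypothetical candidate token embeddings
ref = [(1.0, 0.1), (0.5, 0.9)]    # hypothetical reference token embeddings
print(round(bertscore_f1(cand, ref), 3))
```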
ROUGE-L ROUGE-L (Recall-Oriented Understudy for Gisting Evaluation - Longest Common Subsequence (Lin, 2004)) is an evaluation metric commonly used in natural language processing and text summarization tasks. It is designed to measure the quality of a machine-generated summary by comparing it to one or more reference summaries.
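The longest-common-subsequence computation underlying ROUGE-L can be sketched as follows (beta below is an illustrative weighting parameter, not a value from the paper):

```python
# A compact sketch of the LCS computation behind ROUGE-L, combined into the
# usual F-measure over LCS-based precision and recall.
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f(candidate, reference, beta=1.2):
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    prec, rec = lcs / len(c), lcs / len(r)
    if prec == 0 or rec == 0:
        return 0.0
    return (1 + beta**2) * prec * rec / (rec + beta**2 * prec)

print(rouge_l_f("the cat sat on the mat", "the cat is on the mat"))  # 5/6 ≈ 0.833
```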
CLIP CLIP (Contrastive Language-Image Pretraining (Radford et al., 2021)) score is a metric used to evaluate the performance of the vision-language model, which measures how well the model can correctly associate images with their corresponding captions or textual descriptions.
# C.1.2 RESULTS
In Table 10, we present the performance of different models and baselines on these metrics. Based on the experimental results, it is evident that LURE outperforms the other baselines on both text-translation metrics and image-text matching metrics, with a notable improvement in the CLIP Score metric. This could be attributed to the higher sensitivity of the CLIP Score, compared to text-translation metrics such as BLEU, in capturing object-level differences. These findings are consistent with the overall experimental results presented in Table 1, further confirming the effectiveness of LURE. However, we have also identified certain issues with the BLEU metric for text translation: the differences between baselines were not very pronounced, possibly because such metrics tend to emphasize the evaluation of text style rather than object-level distinctions. These metrics may therefore not be well suited for assessing hallucinations and long-form descriptions when compared to CHAIR.
Table 9: The prompt for ChatGPT3.5 evaluation.
Instruction: Suppose you are a hallucination annotator who judges the degree of hallucination based on objects, and you have the following image information.
Reference captions: {five captions from COCO}
Bounding box: {bounding boxes}
Please just provide the ranks for the below descriptions without any explanation, where the caption ranks first with the most hallucinations. The output format: [caption x, ...]
Descriptions:
caption 1: {description 1}
caption 2: {description 2}
caption 3: {description 3}
caption 4: {description 4}
caption 5: {description 5}
Output:
C.2 ADDITIONAL ANALYSIS ABOUT THE HALLUCINATION FACTORS
To validate that our method reduces the co-occurrence, uncertainty, and object positional biases that affect object hallucination, we further evaluate the proportion of hallucinatory objects among objects with high uncertainty, high co-occurrence, and sentence-ending positions. We compare the changes in these proportions between descriptions generated by MiniGPT-4 and by LURE on the COCO 2014 test dataset. Here, we first describe how we calculate the object ratio under different factors:
Ratio of Co-occurrence-Based Hallucinatory Objects. Similar to the uncertainty hallucination ratio, we obtain C_ratio by calculating the ratio of the number of hallucinatory objects with a high co-occurrence score to the total number of objects with a high co-occurrence score:
C_ratio = Σ_{m'} 1[CoScore_{m'} > CoScore_mean] / Σ_{m=1}^{M} 1[CoScore_m > CoScore_mean],   (6)

where m' ranges over the hallucinatory objects and m over all M objects.
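The computation in Eq. (6) can be sketched as follows; the object scores and hallucination flags below are hypothetical placeholders, not data from the paper:

```python
# Sketch of the co-occurrence hallucination ratio in Eq. (6): among objects whose
# co-occurrence score exceeds the mean score, what fraction are hallucinated?
def cooccurrence_hallucination_ratio(co_scores, hallucinated):
    """co_scores: per-object co-occurrence scores; hallucinated: parallel bool flags."""
    mean_score = sum(co_scores) / len(co_scores)
    high = [(s, h) for s, h in zip(co_scores, hallucinated) if s > mean_score]
    if not high:
        return 0.0
    return sum(h for _, h in high) / len(high)

# Hypothetical example: four objects, two with above-mean co-occurrence scores.
scores = [0.9, 0.8, 0.2, 0.1]        # mean = 0.5
flags = [True, False, False, True]   # which objects are hallucinated
print(cooccurrence_hallucination_ratio(scores, flags))  # → 0.5
```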
Table 10: Performance of different models and baselines on general metrics.
2310.00754 | 72 | Models BLEU-1 BLEU-2 BLEU-3 BLEU-4 BERTS ROUGE-L CLIPS mPLUG-Owl Original CoT Teacher Greedy-Decoding GPT-Ensemble GPT-Teacher LURE (ours) 30.37 25.04 29.91 30.29 29.74 28.19 30.44 14.59 11.48 14.22 14.30 13.91 14.13 15.47 5.618 4.229 5.519 5.509 5.121 6.181 6.640 2.505 1.954 2.431 2.502 2.367 3.128 3.576 86.87 86.61 86.76 86.59 85.94 86.65 86.65 30.21 29.86 31.15 30.35 28.90 30.87 30.31 LLaVa Original CoT Teacher Greedy-Decoding GPT-Ensemble GPT-Teacher LURE (ours) 30.88 29.94 30.52 31.76 25.68 22.06 35.94 15.46 15.01 15.54 17.21 16.24 19.54 21.81 6.984 7.042 7.334 8.491 7.047 3.393 11.33 3.586 3.718 3.906 4.223 2.893 1.493 6.804 | 2310.00754#72 | Analyzing and Mitigating Object Hallucination in Large Vision-Language Models | Large vision-language models (LVLMs) have shown remarkable abilities in
understanding visual information with human languages. However, LVLMs still
suffer from object hallucination, which is the problem of generating
descriptions that include objects that do not actually exist in the images.
This can negatively impact many vision-language tasks, such as visual
summarization and reasoning. To address this issue, we propose a simple yet
powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify
object hallucination in LVLMs by reconstructing less hallucinatory
descriptions. LURE is grounded in a rigorous statistical analysis of the key
factors underlying object hallucination, including co-occurrence (the frequent
appearance of certain objects alongside others in images), uncertainty (objects
with higher uncertainty during LVLM decoding), and object position
(hallucination often appears in the later part of the generated text). LURE can
also be seamlessly integrated with any LVLMs. We evaluate LURE on six
open-source LVLMs, achieving a 23% improvement in general object hallucination
evaluation metrics over the previous best approach. In both GPT and human
evaluations, LURE consistently ranks at the top. Our data and code are
available at https://github.com/YiyangZhou/LURE. | http://arxiv.org/pdf/2310.00754 | Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao | cs.LG, cs.CL, cs.CV | null | null | cs.LG | 20231001 | 20231001 | [
{
"id": "2308.14972"
},
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2308.06394"
},
{
"id": "2304.04920"
},
{
"id": "2204.02311"
},
{
"id": "2306.14565"
},
{
"id": "2306.09299"
},
{
"id": "2305.13534"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2305.10355"
},
{
"id": "2306.04387"
},
{
"id": "2304.08485"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "1904.09675"
},
{
"id": "2308.15126"
},
{
"id": "2306.05685"
},
{
"id": "1702.01806"
},
{
"id": "2305.03726"
},
{
"id": "2304.09349"
},
{
"id": "2303.16199"
},
{
"id": "2302.07257"
},
{
"id": "1904.09751"
}
] |
2310.00754 | 73 | 7.334 8.491 7.047 3.393 11.33 3.586 3.718 3.906 4.223 2.893 1.493 6.804 86.96 86.99 87.11 87.01 84.10 85.94 87.39 31.53 31.82 31.76 32.50 30.84 27.62 32.59 LLaMA-Adapter Original CoT Teacher Greedy-Decoding GPT-Ensemble GPT-Teacher LURE (ours) 29.95 25.45 26.71 30.66 24.92 25.13 30.94 15.36 11.41 12.88 14.63 11.21 10.25 15.81 7.324 4.233 5.388 6.920 4.678 3.929 7.334 3.875 1.687 2.636 2.309 1.890 1.684 3.804 86.83 86.48 86.65 86.90 84.92 85.85 86.96 31.77 39.98 30.50 31.69 27.12 28.68 31.60 MiniGPT-4 Original CoT Teacher Greedy-Decoding GPT-Ensemble GPT-Teacher LURE (ours) 31.22 33.68 32.69 35.12 29.65 33.37 41.20 16.57 20.57 19.87 | 2310.00754#73 | Analyzing and Mitigating Object Hallucination in Large Vision-Language Models | Large vision-language models (LVLMs) have shown remarkable abilities in
understanding visual information with human languages. However, LVLMs still
suffer from object hallucination, which is the problem of generating
descriptions that include objects that do not actually exist in the images.
This can negatively impact many vision-language tasks, such as visual
summarization and reasoning. To address this issue, we propose a simple yet
powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify
object hallucination in LVLMs by reconstructing less hallucinatory
descriptions. LURE is grounded in a rigorous statistical analysis of the key
factors underlying object hallucination, including co-occurrence (the frequent
appearance of certain objects alongside others in images), uncertainty (objects
with higher uncertainty during LVLM decoding), and object position
(hallucination often appears in the later part of the generated text). LURE can
also be seamlessly integrated with any LVLMs. We evaluate LURE on six
open-source LVLMs, achieving a 23% improvement in general object hallucination
evaluation metrics over the previous best approach. In both GPT and human
evaluations, LURE consistently ranks at the top. Our data and code are
available at https://github.com/YiyangZhou/LURE. (Source: http://arxiv.org/pdf/2310.00754; authors: Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao; cs.LG, cs.CL, cs.CV; published 2023-10-01.)
2310.00754 | 74 | GPT-Teacher LURE (ours) 31.22 33.68 32.69 35.12 29.65 33.37 41.20 16.57 20.57 19.87 22.89 19.22 20.28 23.17 9.270 10.72 9.870 12.38 9.878 11.52 13.18 5.190 6.430 5.350 6.770 5.330 5.770 7.580 86.96 86.09 86.06 87.22 85.77 87.01 87.88 31.75 32.39 30.72 33.93 29.83 31.89 35.34 MMGPT Original CoT Teacher Greedy-Decoding GPT-Ensemble GPT-Teacher LURE (ours) 27.27 26.11 26.56 30.15 24.59 23.60 32.71 12.66 12.30 12.38 15.11 13.77 10.92 16.24 5.680 5.580 5.600 6.320 5.673 4.610 7.407 2.290 2.250 2.260 3.573 2.882 2.010 3.830 79.79 76.90 80.16 86.62 84.22 83.11 87.01 29.03 28.77 22.09 31.77 25.78 23.43 32.31 InstructBLIP Original CoT Teacher | 2310.00754#74 | Analyzing and Mitigating Object Hallucination in Large Vision-Language Models | Large vision-language models (LVLMs) have shown remarkable abilities in
2310.00754 | 75 | 84.22 83.11 87.01 29.03 28.77 22.09 31.77 25.78 23.43 32.31 InstructBLIP Original CoT Teacher Greedy-Decoding GPT-Ensemble GPT-Teacher LURE (ours) 29.46 24.04 25.61 29.22 26.32 24.91 29.77 14.52 12.61 12.22 13.98 13.11 11.92 15.23 5.670 4.086 4.321 5.605 5.101 4.652 5.708 2.421 1.837 1.963 2.344 2.396 2.097 2.634 86.71 85.50 85.93 86.11 85.04 85.81 87.94 31.64 28.07 29.89 32.57 30.77 29.49 32.95 0.168 0.189 0.192 0.208 0.159 0.215 0.267 0.242 0.211 0.256 0.249 0.201 0.251 0.238 0.179 0.201 0.142 0.211 0.140 0.186 0.223 0.157 0.177 0.142 0.198 0.140 0.182 0.210 0.177 0.192 0.162 0.188 0.156 0.178 0.201 0.218 | 2310.00754#75 | Analyzing and Mitigating Object Hallucination in Large Vision-Language Models | Large vision-language models (LVLMs) have shown remarkable abilities in
where M_h is the number of hallucinatory descriptions, M represents the total number of descriptions, and $CoScore_{mean} = \frac{1}{M(n_h+n_r)} \sum_{m=1}^{M} \sum_{j=1}^{n_h+n_r} CoScore_{m,j}$.

Ratio of Uncertainty-Based Hallucinatory Objects. We obtain U_ratio by calculating the ratio of the number of hallucinatory objects with high uncertainty to the total number of objects with high uncertainty:
\[
U_{ratio} = \frac{\sum_{s=1}^{M_h} \sum_{i=1}^{n_h} \mathbb{1}[UnScore_{s,i} > UnScore_{mean}]}{\sum_{m=1}^{M} \sum_{j=1}^{n_h+n_r} \mathbb{1}[UnScore_{m,j} > UnScore_{mean}]} \qquad (6)
\]

where $UnScore_{mean} = \frac{1}{M(n_h+n_r)} \sum_{m=1}^{M} \sum_{j=1}^{n_h+n_r} UnScore_{m,j}$.
Preprint
Table 11: Uncertainty-based hallucination object ratio, co-occurrence-based hallucination object ratio, and sentence-ending hallucination object ratio analysis on several models.
| Models | Method | Co-occurrence CRatio | Uncertainty URatio | Position SRatio |
|---|---|---|---|---|
| MiniGPT-4 | Original | 0.106 | 0.221 | 0.227 |
| MiniGPT-4 | LURE (ours) | 0.071 | 0.145 | 0.139 |
| LLaVa | Original | 0.243 | 0.103 | 0.331 |
| LLaVa | LURE (ours) | 0.142 | 0.086 | 0.139 |
| LLaMA-Adapter | Original | 0.295 | 0.178 | 0.442 |
| LLaMA-Adapter | LURE (ours) | 0.176 | 0.102 | 0.272 |
| mPLUG-Owl | Original | 0.128 | 0.229 | 0.259 |
| mPLUG-Owl | LURE (ours) | 0.106 | 0.127 | 0.151 |
| MMGPT | Original | 0.110 | 0.157 | 0.418 |
| MMGPT | LURE (ours) | 0.089 | 0.114 | 0.154 |
| InstructBLIP | Original | 0.213 | 0.147 | 0.389 |
| InstructBLIP | LURE (ours) | 0.123 | 0.090 | 0.156 |
Figure 5: Sensitivity analysis of uncertainty threshold using (a) MiniGPT-4 and (b) LLaVA as revisor backbone.
Ratio of Hallucinatory Objects in the Later Part of the Sentence. We calculate S_ratio as the ratio of the number of hallucinatory objects in the later part of the sentence to the total number of objects in the later part of the sentence:
\[
S_{ratio} = \frac{\sum_{s=1}^{M_h} \sum_{i=1}^{n_h} \mathbb{1}[PoScore_{s,i} > \eta]}{\sum_{m=1}^{M} \sum_{j=1}^{n_h+n_r} \mathbb{1}[PoScore_{m,j} > \eta]} \qquad (7)
\]
where η is the position threshold.
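The two ratios above share one shape: pool every object's score together with a hallucination label, then take the fraction of above-threshold objects that are hallucinatory. A minimal sketch of that computation (the scores are toy values and the function and variable names are illustrative, not from the paper's code):

```python
from typing import List, Tuple

def hallucination_ratio(scored: List[Tuple[float, bool]], threshold: float) -> float:
    """Shared form of Eqs. (6) and (7): among all objects whose score
    exceeds `threshold`, return the fraction that are hallucinatory.
    `scored` pools (score, is_hallucinatory) pairs over all descriptions."""
    above = [is_hall for score, is_hall in scored if score > threshold]
    return sum(above) / len(above) if above else 0.0

# U_ratio (Eq. 6): the threshold is the mean uncertainty score.
unscores = [(0.9, True), (0.8, True), (0.7, False), (0.2, False), (0.1, False)]
unscore_mean = sum(s for s, _ in unscores) / len(unscores)   # 0.54
u_ratio = hallucination_ratio(unscores, unscore_mean)        # 2 of 3 -> 0.667

# S_ratio (Eq. 7): the threshold is the fixed position cutoff eta.
poscores = [(0.95, True), (0.85, False), (0.60, True), (0.30, False)]
s_ratio = hallucination_ratio(poscores, threshold=0.8)       # 1 of 2 -> 0.5
```

The same helper serves both metrics because only the score type and the threshold change between them.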
Results. Based on the data presented in Table 11, it is evident that all three categories of ratios for the descriptions produced by LURE decrease compared to the ratios of the original descriptions. This observation indicates that the factors of uncertainty, co-occurrence, and object position have contributed less to hallucinations in LURE.
# C.3 PARAMETER SENSITIVITY ANALYSIS ON UNCERTAINTY
To further illustrate the robustness of our model, we conducted a parameter sensitivity analysis on the threshold of uncertainty. The uncertainty threshold θ determines the proportion of replacing "object" with [IDK]. From Figure 5, we can observe that our model is robust within a certain range of the uncertainty threshold.
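As a sketch of the masking step that this threshold governs, the snippet below replaces object words whose decoding uncertainty exceeds θ with the [IDK] placeholder; the function, the uncertainty dictionary, and the scores are illustrative assumptions, not the paper's implementation:

```python
def mask_uncertain_objects(caption: str, object_uncertainty: dict,
                           theta: float, idk_token: str = "[IDK]") -> str:
    """Replace every object word whose decoding uncertainty exceeds the
    threshold theta with the [IDK] placeholder; other words are untouched."""
    masked = []
    for word in caption.split():
        bare = word.strip(".,")                    # drop trailing punctuation
        if object_uncertainty.get(bare, 0.0) > theta:
            masked.append(word.replace(bare, idk_token))
        else:
            masked.append(word)
    return " ".join(masked)

uncertainty = {"window": 0.82, "table": 0.10, "book": 0.75}
result = mask_uncertain_objects(
    "People sit around a table near a window.", uncertainty, theta=0.5)
# -> "People sit around a table near a [IDK]."
```

A larger θ masks fewer objects, which is the proportion the sensitivity analysis varies.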
Original Caption: The image shows a man walking down a rainy sidewalk while holding a bright red umbrella to stay dry. The man walks next to a building as rain pours down, making the umbrella a necessary accessory. In addition to the man with the red umbrella, there are several other people in the scene, some of which are carrying handbags despite the wet conditions. Towards the edge of the image, a person holding a small umbrella can be seen, indicating that others are also trying to shield themselves from the rain. The busy street with multiple passersby creates an atmosphere of a bustling city adapting to the rainy weather.
Hallucination Caption: The image shows a man walking down a rainy sidewalk while holding a bright red umbrella to stay dry. Cars driving by with their headlights on contribute to the bustling city atmosphere. The man walks next to a building as rain pours down, making the umbrella a necessary accessory. In addition to the man with the red umbrella, there are several other people in the scene, some of which are carrying handbags despite the wet conditions. Towards the edge of the image, a person holding a small umbrella can be seen, indicating that others are also trying to shield themselves from the rain. The busy street with multiple passersby creates an atmosphere of a bustling city adapting to the rainy weather.
(a) Case 1.
Original Caption: The image depicts a woman in a green shirt sitting at a dining table eating a meal. She is using a fork to pick at the food on her plate, which is positioned right in front of her. The table is set with a variety of items like a sandwich, a bowl, and multiple utensils such as knives and spoons. There are also several cups placed on the table. Although there are other chairs around the table, the woman appears to be dining alone, adding a sense of solitude to the scene. Other empty chairs can be seen in various positions around the dining table. Additionally, there are a few other individuals in the background, but they don't seem to be engaging with the woman or her meal.
Hallucination Caption: The image depicts a woman in a green shirt sitting at a dining table eating a meal. She is using a fork to pick at the food on her plate, which is positioned right in front of her. The table is set with a variety of items like a sandwich, fries, a bowl, and multiple utensils such as knives and spoons. There are also several cups placed on the table. A napkin and water glass are neatly placed beside her plate. Although there are other chairs around the table, the woman appears to be dining alone, adding a sense of solitude to the scene. Other empty chairs can be seen in various positions around the dining table. Additionally, there are a few other individuals in the background, but they don't seem to be engaging with the woman or her meal. The salt and pepper shakers are placed at the center of the table, within easy reach of the woman.
(b) Case 2.
Table 12: Cases of generating hallucinatory descriptions.
Figure 6 shows the decoded description with the hallucinated token highlighted: "This image depicts a group of people sitting around a table. The people are wearing different clothes. There is a [window] in the background, and the room appears to be well-lit. The walls of the room are painted white and there are two doorways that lead to other rooms." The panel also plots the probability of each candidate token in the vocabulary at the highlighted position.
Figure 6: Case of uncertainty in the MiniGPT-4.
D ADDITIONAL CASE STUDIES
D.1 CASES OF UNCERTAINTY
We provide an example using MiniGPT-4 to illustrate the uncertainty present in LVLMs during the decoding process. In the example, we display the word probabilities in the vocabulary at the location of hallucinatory words (sorted in descending order of probability). As shown in Figure 6, we have displayed the decoded tokens and their probabilities at the point where the hallucinatory word "window" occurs. We can observe that the probability of the hallucinatory word "window" is comparable to that of "book". The uncertainty in the model's decoding path is highly influenced by the text generated earlier, leading to the incorrect selection of the word "window" when generating this token.
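This kind of decoding uncertainty can be quantified directly from the next-token distribution, e.g. via its entropy or the gap between the top two probabilities. A toy illustration (the candidate probabilities are invented for the example, not actual MiniGPT-4 outputs):

```python
import math

def token_entropy(probs):
    """Shannon entropy of a next-token distribution; higher values mean
    the model is less certain which token to emit."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Invented next-token distribution at the position where "window" was decoded:
# "window" and "book" receive comparable mass, so entropy is high and the
# gap between the top-2 probabilities is small, mirroring Figure 6.
candidates = {"window": 0.31, "book": 0.29, "lamp": 0.25, "door": 0.15}
probs = sorted(candidates.values(), reverse=True)
entropy = token_entropy(probs)   # roughly 1.35 nats
top2_gap = probs[0] - probs[1]   # 0.31 - 0.29 = 0.02
```

A near-uniform distribution like this is exactly the situation where an object token is flagged as uncertain.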
D.2 CASES OF OUR TRAINING DATASET
Here, we present some cases of training data constructed using GPT-3.5, as shown in Table 12. "Original caption" represents the original standard description, while the "Hallucination caption" column represents the hallucinated description constructed by GPT-3.5. The red portions in the hallucination captions indicate the hallucinations added by GPT-3.5 based on co-occurring object lists and uncertain object lists.
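In the paper this rewriting is done by prompting GPT-3.5; the string-level stand-in below only sketches the shape of the resulting training pair (the function, object lists, and insertion templates are all illustrative assumptions):

```python
import random

def make_hallucinatory_caption(caption: str, cooccurring: list,
                               uncertain: list, seed: int = 0) -> str:
    """Toy stand-in for the GPT-3.5 rewriting step: append sentences
    mentioning frequently co-occurring and high-uncertainty objects to a
    clean caption, yielding the hallucinatory half of a training pair."""
    rng = random.Random(seed)
    extras = [
        f"A {rng.choice(cooccurring)} can also be seen nearby.",
        f"There appears to be a {rng.choice(uncertain)} in the background.",
    ]
    return caption + " " + " ".join(extras)

clean = "A man walks down a rainy sidewalk holding a red umbrella."
hallucinated = make_hallucinatory_caption(
    clean, cooccurring=["car", "bus"], uncertain=["dog", "sign"])
# (hallucinated, clean) is one training pair for the revisor.
```

The revisor is then trained to map the hallucinated description back to the clean one.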
D.3 CASES OF REWRITING CAPTIONS
In this section, we present several examples of rectified descriptions to demonstrate the capability of LURE to reduce hallucination. From Figure 8, we can see that our model demonstrates a high level of proficiency in removing or substituting hallucinatory objects.
D.4 ADDITIONAL CASE COMPARISON BETWEEN LURE AND BASELINES
We carefully selected several baselines that demonstrated promising performance based on our experimental results and conducted a thorough comparison with our proposed method. The detailed results of this comparison can be found in Figure 9. Upon comparing the descriptions generated by the revisor with those from the other methods, it becomes evident that the revisor surpasses the others in terms of accuracy and level of detail in describing the image.
understanding visual information with human languages. However, LVLMs still
suffer from object hallucination, which is the problem of generating
descriptions that include objects that do not actually exist in the images.
This can negatively impact many vision-language tasks, such as visual
summarization and reasoning. To address this issue, we propose a simple yet
powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify
object hallucination in LVLMs by reconstructing less hallucinatory
descriptions. LURE is grounded in a rigorous statistical analysis of the key
factors underlying object hallucination, including co-occurrence (the frequent
appearance of certain objects alongside others in images), uncertainty (objects
with higher uncertainty during LVLM decoding), and object position
(hallucination often appears in the later part of the generated text). LURE can
also be seamlessly integrated with any LVLMs. We evaluate LURE on six
open-source LVLMs, achieving a 23% improvement in general object hallucination
evaluation metrics over the previous best approach. In both GPT and human
evaluations, LURE consistently ranks at the top. Our data and code are
available at https://github.com/YiyangZhou/LURE. | http://arxiv.org/pdf/2310.00754 | Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao | cs.LG, cs.CL, cs.CV | null | null | cs.LG | 20231001 | 20231001 |
2310.00754 | 85 | The description produced by Revisor effectively captures the key elements of the image, such as the presence of a man wearing a white shirt walking on the tennis court while holding a tennis racket, as well as the presence of other individuals in the scene. On the contrary, the other methods fall short in various aspects. The "Original" method's description includes numerous hallucinated objects, such as the "net" and "cap". Although the "CoT" method's description has fewer hallucinated objects, errors in its step-by-step reasoning process, such as incorrectly stating the presence of two tennis players, lead to corresponding errors in the subsequent description.
While the "Teacher" method's description is somewhat accurate, it still struggles to eliminate hallucinated objects effectively. Although GPT demonstrates strong textual comprehension abilities, it can still make mistakes when rewriting descriptions due to the absence of visual information, leaving some hallucinated objects in place and introducing new errors.
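The GPT rewriting baseline discussed above works from a short reference caption and the original description. A minimal sketch of assembling such a prompt (the function name is ours; the wording paraphrases the prompt shown in the comparison figure):

```python
def build_rewrite_prompt(reference_caption, original_description):
    """Assemble a rewrite prompt in the style of the GPT baseline:
    a reference caption, the original description, and an instruction
    to delete hallucinated objects."""
    return (
        f"Reference caption: {reference_caption}\n"
        f"Original description: {original_description}\n"
        "Rewrite the original description to align it with the reference "
        "caption, delete objects that you think are hallucinations, and "
        "keep the original format."
    )

prompt = build_rewrite_prompt(
    "a man holding a tennis racket",
    "The image shows a man in a white shirt walking on a tennis court.",
)
print(prompt.splitlines()[0])  # → Reference caption: a man holding a tennis racket
```

Because the model rewriting this prompt never sees the image, any object present in the original description but missing from the short reference caption can survive the rewrite, which matches the failure mode described above.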
Preprint | 2310.00754#85 |
2310.00754 | 86 | [Figure 8 panels (text recovered from images): a woman playing tennis on a clay court, where the original description hallucinates other people around the court and the revision keeps only the player; and a surfer riding a wave, where the original description hallucinates other people in the water and the revision keeps only the surfer.] | 2310.00754#86 |
2310.00754 | 87 | [Figure 8 panels (text recovered from images, continued): the revised description of the tennis player, and the surfer example in which the original description hallucinates other people in the ocean while the revision describes only the surfer riding the wave.] | 2310.00754#87 |
2310.00754 | 88 | [Figure 8 panel (text recovered from image): a man lying on his stomach with a frisbee; the original description hallucinates water in the scene, which the revised description removes.] | 2310.00754#88 |
2310.00754 | 89 | [Figure 8 panel (text recovered from image): the revised frisbee description, and a snowboarder performing a mid-air trick whose original description hallucinates a city-street background with banners while the revision keeps only the man and his snowboard.]
Figure 8: Additional cases of rectified descriptions
| 2310.00754#89 |
2310.00754 | 90 | [Figure 9 "Original" panel (text recovered from image): the original description of a man in a white shirt walking on a tennis court hallucinates details of his outfit and objects in the background.] | 2310.00754#90 |
2310.00754 | 91 | [Figure 9 "Teacher" and "CoT" panels (text recovered from images): the Teacher baseline, given the BLIP2 reference caption "a man holding a tennis racket", still hallucinates an urban background with parked cars; the CoT baseline first lists the main objects, incorrectly listing two players, and the error propagates into its description.] | 2310.00754#91 |
2310.00754 | 92 | [Figure 9 "CoT" panel (text recovered from image, continued): the CoT description asserts two players dressed in white and tennis balls scattered on the court, repeating the object-listing error.] | 2310.00754#92 |
2310.00754 | 94 | [Figure 9 "GPT" panel (text recovered from image): GPT is asked to rewrite the original description to align with the BLIP2 reference caption and delete hallucinated objects; its rewrite drops some hallucinations but still describes a crowd watching the match.] | 2310.00754#94 |
2310.00754 | 95 | [Figure 9 "LURE" panel (text recovered from image): LURE's revision keeps the man in a white shirt and black shorts walking on the tennis court with his racket, and removes the hallucinated outfit details and background objects.] | 2310.00754#95 |
2309.16609 | 0 | arXiv:2309.16609v1 [cs.CL] 28 Sep 2023
# QWEN TECHNICAL REPORT
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu.
# Qwen Team, Alibaba Group
# ABSTRACT | 2309.16609#0 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 1 | # Qwen Team, Alibaba Group
# ABSTRACT
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce QWEN1, the first installment of our large language model series. QWEN is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes QWEN, the base pretrained language models, and QWEN-CHAT, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, CODE-QWEN and CODE-QWEN-CHAT, as well as mathematics-focused models, MATH-QWEN-CHAT, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models. | 2309.16609#1 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 1 | # ABSTRACT
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present PROMPTBREEDER, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, evaluates them for fitness on a training set, and repeats this process over multiple generations to evolve task-prompts. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
# INTRODUCTION | 2309.16797#1 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutationprompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
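The Promptbreeder chunk above describes an evolutionary loop: mutate a population of task-prompts using LLM-generated mutation-prompts, evaluate fitness on a training set, and repeat over generations, with the mutation-prompts themselves occasionally improved in a self-referential step. A minimal, hypothetical Python sketch of that loop follows; the `score` and `mutate` callables stand in for LLM calls and are assumptions for illustration, not the paper's implementation:

```python
import random

def evolve_prompts(task_prompts, mutation_prompts, score, mutate,
                   generations=10, seed=0):
    """Binary-tournament evolution of task-prompts (illustrative sketch).

    score(prompt) -> float: fitness on a training set (caller-supplied).
    mutate(task_prompt, mutation_prompt) -> str: an LLM-driven rewrite (assumed).
    Mutation-prompts are themselves occasionally mutated (self-referential step).
    """
    rng = random.Random(seed)
    pop = list(task_prompts)
    muts = list(mutation_prompts)
    for _ in range(generations):
        a, b = rng.sample(range(len(pop)), 2)  # binary tournament: pick two
        winner, loser = (a, b) if score(pop[a]) >= score(pop[b]) else (b, a)
        m = rng.choice(muts)
        pop[loser] = mutate(pop[winner], m)    # overwrite loser with mutated winner
        if rng.random() < 0.2:                 # sometimes improve a mutation-prompt too
            muts.append(mutate(m, "Improve this mutation instruction."))
    return max(pop, key=score)
```

With a real LLM, `mutate` would prompt the model to rewrite the winner's task-prompt as instructed by the sampled mutation-prompt, and `score` would run the candidate prompt against held-out training examples.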
2309.16797 | 2 | Prompting is central to the downstream performance of foundation models. For example, different prompt strategies1 can have a significant impact on a model's reasoning abilities (Wei et al., 2022; Nye et al., 2021; Zhou et al., 2022; Wang et al., 2022; Zhou et al., 2023; Wang et al., 2023b), multi-modal processing abilities (Yang et al., 2023b; Wang et al., 2023d), or tool use abilities (Yao et al., 2022; Schick et al., 2023). Furthermore, prompting can improve model distillation (Wang et al., 2023c; Hsieh et al., 2023) and it can be used to simulate agentic behavior (Wang et al., 2023a; Park et al., 2023; Wu et al., 2023). However, these prompt strategies are manually engineered. Since the specific way a prompt is phrased can have a dramatic effect on its utility (Madaan & Yazdanbakhsh, 2022), it raises the question of whether prompt engineering can be automated. Automatic Prompt Engineer (APE, Zhou et al., 2023) attempts to address this by generating an initial distribution of prompts | 2309.16797#2 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutationprompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 3 | 2.1 Data; 2.2 Tokenization; 2.3 Architecture; 2.4 Training; 2.5 Context Length Extension; 2.6 Experimental Results; 3.1 Supervised Finetuning; 3.1.1 Data; 3.1.2 Training | 2309.16609#3 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 3 | can be automated. Automatic Prompt Engineer (APE, Zhou et al., 2023) attempts to address this by generating an initial distribution of prompts using another prompt that infers the problem from a number of input-output examples from the dataset. However, Zhou et al. found "diminishing returns to further selection rounds as the quality seems to stabilize after three rounds", and consequently abandoned the use of an iterative APE. We propose a solution to the problem of diminishing returns via a diversity maintaining evolutionary algorithm for self-referential self-improvement of prompts for LLMs. | 2309.16797#3 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutationprompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 4 | 3.1.1 Data; 3.1.2 Training; 3.2 Reinforcement Learning from Human Feedback (p. 10); 3.2.1 Reward Model (p. 10); 3.2.2 Reinforcement Learning (p. 11); 3.3 Automatic and Human Evaluation of Aligned Models (p. 11); 3.4 Tool Use, Code Interpreter, and Agent | 2309.16609#4 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 4 | Schmidhuber (1990) notes that the "program of a neural network is its weight matrix". Consequently, this "program" can be changed in a self-referential way by the neural network itself (Schmidhuber, 1993; Irie et al., 2022). Such a neural network that improves itself, as well as improving the way it improves itself, might be an important stepping stone towards open-ended self-referential self-improvement of AIs (Schmidhuber, 2003). However, self-improvement via self-referential weight matrices is costly as it requires additional parameters that modify all of the model's
# 1See Appendix A for definitions of terminology.
1 | 2309.16797#4 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutationprompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 5 | 3.4 Tool Use, Code Interpreter, and Agent (p. 11); 4.1 Code Pretraining; 4.2 Code Supervised Fine-Tuning; 4.3 Evaluation; 5.1 Training; 5.2 Evaluation; 6.1 Large Language Models; 6.2 Alignment | 2309.16609#5 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 5 | Zero-shot results from Table 1 (parenthesized values use text-davinci-003, all others PaLM 2-L; "–" = not reported):
Method     LLM                MultiArith*  SingleEq*  AddSub*  SVAMP*  SQA     CSQA
CoT        text-davinci-003   (83.8)       (88.1)     (85.3)   (69.9)  (63.8)  (65.2)
PoT        text-davinci-003   (92.2)       (91.7)     (85.1)   (70.8)  –       –
PS         text-davinci-003   (87.2)       (89.2)     (88.1)   (72.0)  –       –
PS+        text-davinci-003   (91.8)       (94.7)     (92.2)   (75.7)  (65.4)  (71.9)
PS         PaLM 2-L           97.7         90.6       72.4     83.8    50.0    77.9
PS+        PaLM 2-L           92.5         94.7       74.4     86.3    50.1    73.3
APE        PaLM 2-L           95.8         82.2       72.2     73.0    38.4    67.3
OPRO       PaLM 2-L           –            –          –        –       –       –
PB (ours)  PaLM 2-L           99.7         96.4       87.8     90.2    71.8    85.4
| 2309.16797#5 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutationprompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 |
2309.16609 | 6 | Models; 6.2 Alignment; 6.3 Tool Use and Agents; 6.4 LLM for Coding; 6.5 LLM for Mathematics; A.1 More Training Details; A.1.1 Data Format for QWEN-CHAT; A.2 Evaluation | 2309.16609#6 | Qwen Technical Report |
2309.16797 | 6 | Zero-shot results from Table 1, continued (parenthesized values use text-davinci-003, all others PaLM 2-L; "–" = not reported):
Method      LLM                SQA     CSQA    AQuA-RAT  GSM8K
CoT         text-davinci-003   (63.8)  (65.2)  (38.9)    (56.4)
PoT         text-davinci-003   –       –       (43.9)    (57.0)
PS          text-davinci-003   –       –       (42.5)    (58.2)
PS+         text-davinci-003   (65.4)  (71.9)  (46.0)    (59.3)
PS          PaLM 2-L           50.0    77.9    40.2      59.0
PS+         PaLM 2-L           50.1    73.3    39.4      60.5
APE         PaLM 2-L           38.4    67.3    45.7      77.9
OPRO        PaLM 2-L           –       –       –         80.2
PB (ours)   PaLM 2-L           71.8    85.4    62.2      83.9
Few-shot results from Table 1:
Method      LLM                MultiArith*  SingleEq*  AddSub*  SVAMP*  SQA     CSQA    AQuA-RAT  GSM8K
Manual-CoT  text-davinci-003   (93.6)       (93.5)     (91.6)   (80.3)  (71.2)  (78.3)  (48.4)    (58.4)
Auto-CoT    text-davinci-003   (95.5)       (92.1)     (90.8)   (78.1)  –       –       (41.7)    (57.1)
PB (ours)   PaLM 2-L           100.0        98.9       87.1     93.7    80.2    85.9    64.6      83.5
| 2309.16797#6 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution |
Table 1: Promptbreeder (PB) comparison to Chain-of-Thought (Manual-CoT, Wei et al., 2022), Zero-shot CoT (Kojima et al., 2022), Program-of-Thoughts (PoT, Chen et al., 2022), Auto-CoT (Zhang et al., 2023b), OPRO (Yang et al., 2023a), Automatic Prompt Engineer Zero-shot prompt (APE, Zhou et al., 2023), Plan-and-Solve with (PS+) and without the improved prompt (PS, Wang et al., 2023b), and using PaLM 2-L (Anil et al., 2023) as the underlying LLM (APE, PS PaLM 2-L/PS+ PaLM 2-L). Best results in both the zero-shot and few-shot categories are highlighted in bold. Results in brackets are taken directly from the Plan-and-Solve paper, which uses text-davinci-003 (Brown et al., 2020). For datasets with asterisks (MultiArith*, SingleEq*, AddSub*, and SVAMP*), we randomly took half of the examples for training and report accuracy on the remaining test set. See Section 4 and Appendix I for details on the prompts and datasets. | 2309.16797#7 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution |
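The half-split evaluation protocol described in the Table 1 caption above (evolve prompts on a random half of each starred dataset, report accuracy on the held-out half) can be sketched as follows. This is a minimal illustrative sketch, not the paper's code; `predict` stands in for any prompted model.

```python
import random

def split_half(examples, seed=0):
    """Randomly split a dataset in half: one half for prompt evolution
    (training), the other held out for the reported test accuracy."""
    rng = random.Random(seed)
    indices = list(range(len(examples)))
    rng.shuffle(indices)
    cut = len(examples) // 2
    train = [examples[i] for i in indices[:cut]]
    test = [examples[i] for i in indices[cut:]]
    return train, test

def accuracy(predict, examples):
    """Fraction of (question, answer) pairs where the prediction matches."""
    return sum(predict(q) == a for q, a in examples) / len(examples)
```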
2309.16609 | 8 | # A.3 Analysis of Code Interpreter
# INTRODUCTION
Large language models (LLMs) (Radford et al., 2018; Devlin et al., 2018; Raffel et al., 2020; Brown et al., 2020; OpenAI, 2023; Chowdhery et al., 2022; Anil et al., 2023; Thoppilan et al., 2022; Touvron et al., 2023a;b) have revolutionized the field of artificial intelligence (AI) by providing a powerful foundation for complex reasoning and problem-solving tasks. These models have the ability to compress vast knowledge into neural networks, making them incredibly versatile agents. With a chat interface, LLMs can perform tasks that were previously thought to be the exclusive domain of humans, especially those involving creativity and expertise (OpenAI, 2022; Ouyang et al., 2022; Anil et al., 2023; Google, 2023; Anthropic, 2023a;b). They can engage in natural language conversations with humans, answering questions, providing information, and even generating creative content such as stories, poems, and music. This has led to the development of a wide range of applications, from chatbots and virtual assistants to language translation and summarization tools. | 2309.16609#8 | Qwen Technical Report |
2309.16797 | 8 | parameters. Since behaviors and capabilities of LLMs are significantly influenced by the prompts that we provide to them, we can similarly think of prompts as the program of an LLM (Zhou et al., 2023). In this view, changing a prompt strategy such as the Scratchpad method (Nye et al., 2021) or Chain-of-Thought Prompting (Wei et al., 2022) corresponds to changing the "program" of the LLM. Taking this analogy further, we can use the LLM itself to change its prompts, as well as the way it changes these prompts, moving us towards a fully self-referential self-improving system grounded in LLMs. | 2309.16797#8 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution |
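The self-referential idea in the chunk above, using the LLM to change not only its prompts but also the way those prompts are changed, can be illustrated with a hyper-mutation step in which the mutation-prompt is itself rewritten before being applied. This is a minimal sketch under stated assumptions: `llm` is a deterministic stub standing in for a real model call, and `HYPER_MUTATION_PROMPT` is an illustrative placeholder, not the paper's exact wording.

```python
def llm(prompt: str) -> str:
    """Stand-in for a real LLM call (hypothetical stub: upper-cases the prompt)."""
    return prompt.upper()

# Illustrative placeholder wording, not taken from the paper.
HYPER_MUTATION_PROMPT = (
    "Please summarize and improve the following instruction for modifying prompts:"
)

def self_referential_mutation(task_prompt: str, mutation_prompt: str):
    """First the LLM improves the mutation-prompt itself, then the improved
    mutation-prompt is used to rewrite the task-prompt."""
    new_mutation_prompt = llm(HYPER_MUTATION_PROMPT + "\n" + mutation_prompt)
    new_task_prompt = llm(new_mutation_prompt + "\nINSTRUCTION: " + task_prompt)
    return new_task_prompt, new_mutation_prompt
```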
2309.16609 | 9 | LLMs are not just limited to language tasks. They can also function as a generalist agent (Reed et al., 2022; Bai et al., 2022a; Wang et al., 2023a; AutoGPT, 2023; Hong et al., 2023), collaborating with external systems, tools, and models to achieve the objectives set by humans. For example, LLMs can understand multimodal instructions (OpenAI, 2023; Bai et al., 2023; Liu et al., 2023a; Ye et al., 2023; Dai et al., 2023; Peng et al., 2023b), execute code (Chen et al., 2021; Zheng et al., 2023; Li et al., 2023d), use tools (Schick et al., 2023; LangChain, Inc., 2023; AutoGPT, 2023), and more. This opens up a whole new world of possibilities for AI applications, from autonomous vehicles and robotics to healthcare and finance. As these models continue to evolve and improve, we can expect to see even more innovative and exciting applications in the years to come. Whether it's helping us solve complex problems, creating new forms of entertainment, or transforming the way we live and work, LLMs are poised to play a central role in shaping the future of AI. | 2309.16609#9 | Qwen Technical Report |
2309.16797 | 9 | In this paper, we introduce PROMPTBREEDER (PB) for self-referential self-improvement of LLMs. Given a seed set of mutation-prompts (i.e. instructions to modify a task-prompt), thinking-styles (i.e. text descriptions of general cognitive heuristics), and a domain-specific problem description, PB generates variations of the task-prompts and mutation-prompts, exploiting the fact that LLMs can be prompted to act as mutation operators (Meyerson et al., 2023). Based on the fitness of the evolved task-prompts as measured on the training set, we select a subset of evolutionary units consisting of task-prompts and their associated mutation-prompt, to transmit to future generations. Over multiple generations of PB, we observe prompts adapting to the domain at hand. For example, in a mathematical domain, PB evolved the task-prompt "Show all your working. II. You should use the correct mathematical notation and vocabulary, where appropriate. words. V. Your workings out should be neat and legible" on GSM8K your answers. (see Appendix J). On a wide range of commonly used benchmarks | 2309.16797#9 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution |
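The generate-evaluate-select loop described in the chunk above can be sketched as a binary tournament over (task-prompt, mutation-prompt) units: the fitter unit's mutation-prompt rewrites its task-prompt via the LLM, and the mutated copy overwrites the less fit unit. This is a minimal illustrative sketch, not the paper's implementation: `llm` is a deterministic stub standing in for a real model call, and the answer check in `fitness` is a placeholder.

```python
import random

def llm(prompt: str) -> str:
    """Stand-in for a real LLM call (hypothetical stub: reverses the prompt)."""
    return prompt[::-1]

def fitness(task_prompt: str, train_set) -> float:
    """Score a task-prompt as the fraction of training questions whose known
    answer appears in the model output (placeholder for real answer checking)."""
    correct = sum(1 for question, answer in train_set
                  if answer in llm(task_prompt + " " + question))
    return correct / len(train_set)

def promptbreeder_step(population, train_set):
    """One binary-tournament generation over (task_prompt, mutation_prompt) units."""
    a, b = random.sample(range(len(population)), 2)
    fa = fitness(population[a][0], train_set)
    fb = fitness(population[b][0], train_set)
    winner, loser = (a, b) if fa >= fb else (b, a)
    task_prompt, mutation_prompt = population[winner]
    # The LLM itself acts as the mutation operator on the task-prompt.
    new_task_prompt = llm(mutation_prompt + "\nINSTRUCTION: " + task_prompt)
    population[loser] = (new_task_prompt, mutation_prompt)
    return population
```

Running this step repeatedly, while also mutating the mutation-prompts, is the core of the self-referential loop.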
2309.16609 | 10 | [Figure 1: diagram of the Qwen model lineage, with arrows from the pretrained models to the SFT, RM, and RLHF models, and branches Code-Qwen → Code-Qwen-Chat and → Math-Qwen-Chat]
Figure 1: Model Lineage of the Qwen Series. We have pretrained the language models, namely QWEN, on massive datasets containing trillions of tokens. We then use SFT and RLHF to align QWEN to human preference and thus we have QWEN-CHAT and specifically its improved version QWEN-CHAT-RLHF. Additionally, we also develop specialized models for coding and mathematics, such as CODE-QWEN, CODE-QWEN-CHAT, and MATH-QWEN-CHAT based on QWEN with similar techniques. Note that we previously released the multimodal LLM, QWEN-VL and QWEN-VL-CHAT (Bai et al., 2023), which are also based on our QWEN base models. | 2309.16609#10 | Qwen Technical Report |
2309.16797 | 10 | [truncated chunk: end of an evolved task-prompt] "... V. Your workings out should be neat and legible" on GSM8K (see Appendix J). On a wide range of commonly used benchmarks spanning commonsense reasoning, arithmetic, and ethics, we find that PB outperforms state-of-the-art methods like Chain-of-Thought (Wei et al., 2022) and Plan-and-Solve (Wang et al., 2023b) prompting. As PB does not require any parameter updates for self-referential self-improvement, we believe this approach points to an interesting future where larger and more capable LLMs could further amplify the gains of our approach. | 2309.16797#10 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutationprompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 11 | Despite their impressive capabilities, LLMs are often criticized for their lack of reproducibility, steerability, and accessibility to service providers. In this work, we are pleased to present and release the initial version of our LLM series, QWEN. QWEN is a moniker that derives from the Chinese phrase Qianwen, which translates to "thousands of prompts" and conveys the notion of embracing a wide range of inquiries. QWEN is a comprehensive language model series that encompasses distinct models with varying parameter counts. The model series includes the base pretrained language models, chat models finetuned with human alignment techniques, i.e., supervised finetuning (SFT), reinforcement learning with human feedback (RLHF), etc., as well as specialized models in coding and math. The details are outlined below:
1. The base language models, namely QWEN, have undergone extensive training using up to 3 trillion tokens of diverse texts and codes, encompassing a wide range of areas. These models have consistently demonstrated superior performance across a multitude of downstream tasks, even when compared to significantly larger counterparts. | 2309.16609#11 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 11 | In summary, this paper makes the following main contributions: (i) we introduce Promptbreeder, a self-referential self-improvement method for LLMs that evolves prompts for a domain at hand, as well as improves the way it is evolving these prompts, (ii) we report improvements over state-of-the-art prompt strategies on a wide range of commonly used arithmetic and commonsense reasoning benchmarks, and (iii) we investigate the various self-referential components of Promptbreeder and their contribution to our results.
| 2309.16797#11 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutationprompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 12 | 2. The QWEN-CHAT models have been carefully finetuned on a curated dataset relevant to task performance, chat, tool use, agent, safety, etc. The benchmark evaluation demonstrates that the SFT models can achieve superior performance. Furthermore, we have trained reward models to mimic human preference and applied them in RLHF for chat models that can produce responses preferred by humans. Through the human evaluation of a challenging test, we find that QWEN-CHAT models trained with RLHF are highly competitive, though still falling behind GPT-4 on our benchmark.
3. In addition, we present specialized models called CODE-QWEN, which includes CODE-QWEN-7B and CODE-QWEN-14B, as well as their chat models, CODE-QWEN-14B-CHAT and CODE-QWEN-7B-CHAT. Specifically, CODE-QWEN has been pre-trained on extensive datasets of code and further fine-tuned to handle conversations related to code generation, debugging, and interpretation. The results of experiments conducted on benchmark datasets, such as HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021), and HumanEvalPack (Muennighoff et al., 2023), demonstrate the high level of proficiency of CODE-QWEN in code understanding and generation. | 2309.16609#12 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 12 | [Figure 1 text residue. Recoverable content: initialization of a population of task-prompts and mutation-prompts from a problem description (specific to GSM8K, AQuA, ETHOS, SVAMP, etc.) and sampled thinking-styles; mutation operators including direct mutation, estimation-of-distribution mutation, Lamarckian mutation, and hypermutation of the mutation-prompt; example task-prompts such as "Solve this math word problem" and "Let's think step by step", and example mutation-prompts such as "Change this instruction to make it more fun".] | 2309.16797#12 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutationprompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
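The Promptbreeder figure text in the record above lists classes of mutation operators, including direct mutation of a task-prompt and hypermutation of the mutation-prompt itself. A minimal illustrative sketch of those two operators follows; `llm` is a hypothetical stand-in (an assumption, not the paper's implementation or any real API), stubbed deterministically so the example runs offline.

```python
def llm(prompt: str) -> str:
    """Hypothetical LLM call (an assumed stand-in, not a real API): a
    deterministic stub so the example runs offline. A real system would
    query a language model here."""
    return prompt.upper()[:60]

def direct_mutation(task_prompt: str, mutation_prompt: str) -> str:
    # Direct mutation: the LLM rewrites the task-prompt as instructed by the
    # mutation-prompt, e.g. "Change this instruction to make it more fun".
    return llm(f"{mutation_prompt} INSTRUCTION: {task_prompt} INSTRUCTION MUTANT =")

def hyper_mutation(mutation_prompt: str) -> str:
    # Hypermutation: mutate the mutation-prompt itself -- the self-referential
    # step that improves how task-prompts are improved.
    return llm(f"Improve this mutation prompt: {mutation_prompt}")
```

For instance, `direct_mutation("Solve this math word problem", "Change this instruction to make it more fun")` yields a new candidate task-prompt string; in a real system both operators would return LLM-generated text instead of the stub's transformation.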
2309.16609 | 13 | 4. This research additionally introduces MATH-QWEN-CHAT specifically designed to tackle mathematical problems. Our results show that both MATH-QWEN-7B-CHAT and MATH- QWEN-14B-CHAT outperform open-sourced models in the same sizes with large margins and are approaching GPT-3.5 on math-related benchmark datasets such as GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021).
5. Besides, we have open-sourced QWEN-VL and QWEN-VL-CHAT, which have the versatile ability to comprehend visual and language instructions. These models outperform the current open-source vision-language models across various evaluation benchmarks and support text recognition and visual grounding in both Chinese and English languages. Moreover, these models enable multi-image conversations and storytelling. Further details can be found in Bai et al. (2023).
Now, we officially open-source the 14B-parameter and 7B-parameter base pretrained models QWEN and aligned chat models QWEN-CHAT2. This release aims at providing more comprehensive and powerful LLMs at developer- or application-friendly scales. | 2309.16609#13 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 14 | The structure of this report is as follows: Section 2 describes our approach to pretraining and results of QWEN. Section 3 covers our methodology for alignment and reports the results of both automatic evaluation and human evaluation. Additionally, this section describes details about our efforts in building chat models capable of tool use, code interpreter, and agent. In Sections 4 and 5, we delve into specialized models of coding and math and their performance. Section 6 provides an overview of relevant related work, and Section 7 concludes this paper and points out our future work.
# 2 PRETRAINING
The pretraining stage involves learning from a vast amount of data to acquire a comprehensive understanding of the world and its various complexities. This includes not only basic language capabilities but also advanced skills such as arithmetic, coding, and logical reasoning. In this section, we introduce the data, the model design and scaling, as well as the comprehensive evaluation results on benchmark datasets.
# 2.1 DATA
The size of data has proven to be a crucial factor in developing a robust large language model, as highlighted in previous research (Hoffmann et al., 2022; Touvron et al., 2023b). To create an effective pretraining dataset, it is essential to ensure that the data are diverse and cover a wide range
# 2GitHub: https://github.com/QwenLM/Qwen.
[Figure 2 legend residue: markers for GPT-4, GPT-3.5, Previous 13B SOTA, and Qwen-14B] | 2309.16609#14 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 14 | [Figure 1 label residue: "Estimated fitness from a batch of training Q&A pairs"]
Figure 1: Overview of Promptbreeder. Given a problem description and an initial set of general "thinking-styles" and mutation-prompts, Promptbreeder generates a population of units of evolution, each unit consisting of typically two task-prompts and a mutation-prompt. We then run a standard binary tournament genetic algorithm (Harvey, 2011). To determine the fitness of a task-prompt we evaluate its performance on a random batch of training data. Over multiple generations, Promptbreeder subsequently mutates task-prompts as well as mutation-prompts using five different classes of mutation operators. The former leads to increasingly domain-adaptive task-prompts whereas the latter evolves increasingly useful mutation-prompts in a self-referential way.
# 2 RELATED WORK
Prompting an LLM in the right way is essential to its downstream performance (Moradi & Samwald, 2021; Madaan & Yazdanbakhsh, 2022; Zhou et al., 2023). Indeed, even the order in which prompts are presented can heavily influence LLM performance (Lu et al., 2022). A number of recent works have focused on devising better prompt strategies, or even automating such prompt engineering. | 2309.16797#14 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutationprompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 15 | # 2GitHub: https://github.com/QwenLM/Qwen.
[Figure 2 radar chart: series GPT-4, GPT-3.5, Previous 13B SOTA, Qwen-14B; axes MMLU, BBH, C-Eval, PIQA, AGIEval, HellaSwag, Gaokao-Bench, CSQA, GSM8K, MBPP, MATH, HumanEval]
Figure 2: Performance of GPT-4, GPT-3.5, the previous 13B SOTA, as well as QWEN-14B. We demonstrate the results on 12 datasets covering multiple domains, including language understanding, knowledge, reasoning, etc. QWEN significantly outperforms the previous SOTA of similar model sizes, but still lags behind both GPT-3.5 and GPT-4.
of types, domains, and tasks. Our dataset is designed to meet these requirements and includes public web documents, encyclopedias, books, code, etc. Additionally, our dataset is multilingual, with a significant portion of the data being in English and Chinese. | 2309.16609#15 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 15 | Prompting: Chain-of-Thought Prompting (CoT, Wei et al., 2022) is a popular prompt strategy which provides intermediate reasoning steps as few-shot prompts to an LLM, thereby significantly improving its arithmetic, commonsense, and symbolic reasoning abilities. Notably, the gains of CoT are more pronounced for stronger LLMs. This is intriguing, as it points to the possibility of increasingly capable (and potentially open-ended) self-improving mechanisms on top of adept LLMs, a hypothesis that Promptbreeder directly builds upon. Instead of few-shot CoT prompting, Kojima et al. (2022) demonstrate that LLMs can also be prompted zero-shot (e.g. "Let's think step by step") to produce their own chains of thoughts (Zero-shot CoT) that improve reasoning abilities. Self-Consistency (CoT-SC, Wang et al., 2022) extends CoT by sampling a diverse set of workings out and selecting the most consistent answer. Tree of Thoughts (ToT, Yao et al., 2023) generalizes CoT to multiple workings out that can be expanded or | 2309.16797#15 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
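The Self-Consistency (CoT-SC) strategy described in the Promptbreeder chunk above — sample several diverse workings out, then keep the most common final answer — reduces to a majority vote over extracted answers. A minimal sketch, with hard-coded strings standing in for sampled LLM outputs:

```python
from collections import Counter

def self_consistency(sampled_answers):
    # CoT-SC: marginalize over diverse reasoning paths by majority vote
    # on the final answers they produce.
    counts = Counter(sampled_answers)
    answer, _ = counts.most_common(1)[0]
    return answer

# Final answers extracted from, e.g., 5 sampled chains of thought (stand-ins).
samples = ["42", "41", "42", "42", "17"]
consistent = self_consistency(samples)
```

In a real pipeline the answers would first be parsed out of free-form reasoning traces; the vote itself is this simple.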
2309.16609 | 16 | To ensure the quality of our pretraining data, we have developed a comprehensive data preprocessing procedure. For public web data, we extract text from HTML and use language identification tools to determine the language. To increase the diversity of our data, we employ deduplication techniques, including exact-match deduplication after normalization and fuzzy deduplication using MinHash and LSH algorithms. To filter out low-quality data, we employ a combination of rule-based and machine-learning-based methods. Specifically, we use multiple models to score the content, including language models, text-quality scoring models, and models for identifying potentially offensive or inappropriate content. We also manually sample texts from various sources and review them to ensure their quality. To further enhance the quality of our data, we selectively up-sample data from certain sources, to ensure that our models are trained on a diverse range of high-quality content. In recent studies (Zeng et al., 2022; Aribandi et al., 2021; Raffel et al., 2020), it has been demonstrated that pretraining language models with multi-task instructions can enhance their zero-shot and few-shot performance. To further enhance the performance of our model, we have incorporated high-quality instruction data into our pretraining process. To safeguard the integrity of our benchmark assessment, we have adopted a similar approach as Brown et al. (2020) and meticulously eliminated any instruction
| 2309.16609#16 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
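The fuzzy deduplication mentioned in the Qwen pretraining chunk above (MinHash signatures, typically followed by LSH banding to bucket candidates) can be sketched with stdlib hashing. This is an illustrative toy — shingle size, number of permutations, and the LSH banding step are all simplified or omitted:

```python
import hashlib

def shingles(text, n=3):
    # Word-level n-gram shingles of a document.
    words = text.split()
    return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}

def minhash(text, num_perm=16):
    # One hash per "permutation": salt each shingle with the permutation index
    # and keep the minimum digest, approximating a random permutation.
    sig = []
    for p in range(num_perm):
        sig.append(min(
            int(hashlib.md5(f"{p}:{s}".encode()).hexdigest(), 16)
            for s in shingles(text)))
    return sig

def est_jaccard(sig_a, sig_b):
    # Fraction of matching signature slots estimates Jaccard similarity.
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)

a = "the quick brown fox jumps over the lazy dog"
b = "the quick brown fox jumps over the lazy cat"
c = "completely different sentence about tokenizers"
```

Near-duplicates like `a` and `b` share most shingles, so their signatures agree in most slots, while unrelated texts agree in almost none; LSH then turns this into sub-quadratic candidate lookup.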
2309.16797 | 16 | and selecting the most consistent answer. Tree of Thoughts (ToT, Yao et al., 2023) generalizes CoT to multiple workings out that can be expanded or backtracked from. Graph of Thoughts (GoT, Besta et al., 2023) is a further generalization to arbitrary graph structures. Plan-and-Solve Prompting (PS, Wang et al., 2023b) encourages an LLM to first devise a plan to solve a problem before attempting to solve it. Similarly, Least-to-Most Prompting (Zhou et al., 2022) encourages an LLM to decompose a problem into subparts, and then to solve each part individually before synthesizing an answer. Self-Refine (Madaan et al., 2023) prompts an LLM to generate a response, to provide feedback on the response, and to finally refine the solution. | 2309.16797#16 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 17 | [Figure 3 bar chart (garbled in extraction): y-axis "Compression Ratio"; x-axis: languages and code]
Figure 3: Encoding compression rates of different models. We randomly selected 1 million document corpora of each language to test and compare the encoding compression rates of different models (with XLM-R (Conneau et al., 2019), which supports 100 languages, as the base value 1, not shown in the figure). As can be seen, while ensuring the efficient decoding of Chinese, English, and code, QWEN also achieves a high compression rate for many other languages (such as th, he, ar, ko, vi, ja, tr, id, pl, ru, nl, pt, it, de, es, fr, etc.), equipping the model with strong scalability as well as high training and inference efficiency in these languages.
samples that exhibit a 13-gram overlap with any data present in the test sets utilized in our evaluation. Given the large number of downstream tasks, it is not feasible to repeat this filtering process for all tasks. Instead, we have made sure that the instruction data for the reported tasks have undergone our filtering process to ensure their accuracy and reliability. Finally, we have built a dataset of up to 3 trillion tokens.
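The 13-gram overlap decontamination described above can be sketched as a word-level n-gram intersection check. This is a minimal illustration; the paper does not specify the exact tokenization or normalization used, so plain whitespace splitting stands in here:

```python
def ngrams(words, n):
    # All contiguous word-level n-grams as hashable tuples.
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def is_contaminated(sample, test_sets, n=13):
    # Drop a training sample if any of its word-level 13-grams also
    # appears in any evaluation set.
    sample_grams = ngrams(sample.split(), n)
    if not sample_grams:  # shorter than n words: nothing to match
        return False
    test_grams = set()
    for text in test_sets:
        test_grams |= ngrams(text.split(), n)
    return bool(sample_grams & test_grams)

test_doc = "one two three four five six seven eight nine ten eleven twelve thirteen fourteen"
clean = "a completely unrelated training sample"
leaky = "prefix one two three four five six seven eight nine ten eleven twelve thirteen"
```

At corpus scale the test-set n-grams would be precomputed once (e.g. as hashes in a Bloom filter) rather than rebuilt per sample.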
2.2 TOKENIZATION | 2309.16609#17 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 17 |
In contrast to gradient-free approaches above, Soft Prompting approaches (e.g., Liu et al., 2021; Qin & Eisner, 2021; Lester et al., 2021) directly fine-tune continuous prompt representations. Huang et al. (2022) use CoT and CoT-SC on an unlabelled dataset of questions, and subsequently fine-tune an LLM based on generated solutions. Similarly, Zelikman et al. (2022) uses CoT to generate rationales and fine-tunes the LLM based on those examples and rationales that yielded the correct answer. However, as argued by Zhou et al. (2023), any approach that updates all or a portion of LLM parameters will not scale as models get bigger and, moreover, will not work with the increasing number of LLMs hidden behind an API. | 2309.16797#17 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 18 | 2.2 TOKENIZATION
The design of vocabulary significantly impacts the training efficiency and the downstream task performance. In this study, we utilize byte pair encoding (BPE) as our tokenization method, following GPT-3.5 and GPT-4. We start with the open-source fast BPE tokenizer, tiktoken (Jain, 2022), and select the vocabulary cl100k_base as our starting point. To enhance the performance of our model on multilingual downstream tasks, particularly in Chinese, we augment the vocabulary with commonly used Chinese characters and words, as well as those in other languages. Also, following Touvron et al. (2023a;b), we have split numbers into single digits. The final vocabulary size is approximately 152K. | 2309.16609#18 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
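The digit-splitting step from the Qwen tokenization chunk above (numbers are split into single digits before BPE, following LLaMA) can be sketched as a regex pre-tokenization pass. This is a simplified stand-in for the real pre-tokenizer, which applies a fuller splitting pattern:

```python
import re

def split_digits(text):
    # Insert spaces so every digit becomes its own pre-token; the BPE
    # vocabulary then never needs merged multi-digit tokens like "3000".
    return re.sub(r"\d", lambda m: f" {m.group(0)} ", text).split()

tokens = split_digits("trained on 3000B tokens")
```

Keeping digits atomic makes number tokenization length-uniform, which is often credited with helping arithmetic behavior.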
2309.16797 | 18 | All of the prompt engineering approaches above are domain-agnostic but hand-designed. Central to our work is the hypothesis that we could do better by employing an automated self-improvement process that can adapt prompts to the domain at hand. Auto-CoT (Zhang et al., 2023b) and Automatic-CoT (Shum et al., 2023) automatically find reasoning chains for Few-Shot CoT. Automatic Prompt Engineer (APE, Zhou et al., 2023) uses one generator-prompt to generate prompt candidates, and another mutation-prompt to mutate them. In contrast to APE, our work performs compositional task-specific initialization of mutation-prompts, subsequent online mutation of mutation-prompts, uses special mutation operators that take into account the whole population and elite history, and uses diversity-maintenance methods, all of which help avoid the problem of diminishing returns and diversity loss suffered by APE. | 2309.16797#18 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
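The self-referential twist contrasted with APE in the Promptbreeder chunk above — mutating the mutation-prompt itself, not only the task-prompt — can be sketched with a stub `llm()` call. Both the stub and the hyper-level prompt text are illustrative assumptions, not Promptbreeder's actual prompts:

```python
import random

def llm(prompt, rng):
    # Stub: a real system would sample a continuation from a language model.
    # Here we just echo the text after the instruction with a variant tag.
    return prompt.split(":", 1)[1].strip() + f" (variant {rng.randint(0, 99)})"

# Hypothetical hyper-level prompt that rewrites mutation-prompts.
HYPER = "Improve this mutation-prompt so it produces better task-prompt edits:"

def mutate_unit(unit, rng):
    # First-order mutation: the mutation-prompt rewrites the task-prompt.
    task = llm(f"{unit['mutation_prompt']}: {unit['task_prompt']}", rng)
    # Self-referential mutation: a hyper-level prompt rewrites the
    # mutation-prompt itself, so the mutation operator also evolves.
    mutation = llm(f"{HYPER} {unit['mutation_prompt']}", rng)
    return {"task_prompt": task, "mutation_prompt": mutation}

rng = random.Random(0)
unit = {"task_prompt": "Solve the problem.", "mutation_prompt": "Make it stepwise"}
child = mutate_unit(unit, rng)
```

The key structural point is that the same evolutionary loop improves both levels: better mutation-prompts yield better task-prompt variation, which feeds back into fitness.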
2309.16609 | 19 | The performance of the QWEN tokenizer in terms of compression is depicted in Figure 3. In this comparison, we have evaluated QWEN against several other tokenizers, including XLM-R (Conneau et al., 2019), LLaMA (Touvron et al., 2023a), Baichuan (Inc., 2023a), and InternLM (InternLM Team, 2023). Our findings reveal that QWEN achieves higher compression efficiency than its competitors in most languages. This implies that the cost of serving can be significantly reduced since a smaller number of tokens from QWEN can convey more information than its competitors. Furthermore, we have conducted preliminary experiments to ensure that scaling the vocabulary size of QWEN does not negatively impact the downstream performance of the pretrained model. Despite the increase in vocabulary size, our experiments have shown that QWEN maintains its performance levels in downstream evaluation.
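The compression comparison discussed above boils down to token counts over a shared corpus, normalized against a baseline tokenizer. A minimal sketch with toy character and whitespace tokenizers standing in for real BPE vocabularies (lower ratio means better compression):

```python
def compression_ratio(texts, tokenize, baseline_tokenize):
    # Ratio of total token counts: < 1.0 means `tokenize` needs fewer
    # tokens than the baseline for the same corpus.
    tokens = sum(len(tokenize(t)) for t in texts)
    baseline = sum(len(baseline_tokenize(t)) for t in texts)
    return tokens / baseline

corpus = ["large language models compress text",
          "tokenizer efficiency varies by language"]
char_tok = list       # baseline stand-in: one token per character
word_tok = str.split  # stand-in for a tokenizer with a large vocabulary
ratio = compression_ratio(corpus, word_tok, char_tok)
```

Figure 3's per-language bars are exactly this quantity computed per language, with XLM-R as the baseline fixed at 1.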
# 2.3 ARCHITECTURE
QWEN is designed using a modified version of the Transformer architecture. Specifically, we have adopted the recent open-source approach of training large language models, LLaMA (Touvron et al., 2023a), which is widely regarded as the top open-source LLM. Our modifications to the architecture include:
Table 1: Model sizes, architectures, and optimization hyper-parameters. | 2309.16609#19 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 19 | Concurrently to our work, Yang et al. (2023a) developed Optimization by PROmpting (OPRO), a prompt optimization method that varies prompts using a single complex mutation prompt, and evaluates newly generated prompts on a small fixed training set of problems. In contrast, Promptbreeder autonomously evolves multiple LLM-generated mutation-prompts as well as task-prompts, and evaluates fitness on random subsets from the whole training set during evolution. At the time of its release, OPRO achieved a score of 80.2% via the optimized zero-shot prompt "Take a deep breath and work on this problem step-by-step" on GSM8K. Promptbreeder surpasses this with 83.9% in the zero-shot setting with the unintuitively simple prompt "SOLUTION", further evidence for the sensitivity of LLMs to prompts and the importance of finding effective prompts automatically. Also concurrently to our work, Guo et al. (2023) developed EvoPrompt, which uses a fixed mutation
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutationprompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification.
2309.16609 | 20 | Table 1: Model sizes, architectures, and optimization hyper-parameters.

| # of Params | 1.8B | 7B | 14B |
| --- | --- | --- | --- |
| Hidden size | 2048 | 4096 | 5120 |
| Heads | 16 | 32 | 40 |
| Layers | 24 | 32 | 40 |
| Learning rate | 3.0 × 10⁻⁴ | 3.0 × 10⁻⁴ | 3.0 × 10⁻⁴ |
| Batch size | 4M | 4M | 4M |
| Training tokens | 2.2T | 2.4T | 3.0T |
• Embedding and output projection. Based on preliminary experimental findings, we have opted for the untied embedding approach instead of tying the weights of input embedding and output projection. This decision was made in order to achieve better performance at the price of memory costs.
• Positional embedding. We have chosen RoPE (Rotary Positional Embedding) (Su et al., 2021) as our preferred option for incorporating positional information into our model. RoPE has been widely adopted and has demonstrated success in contemporary large language models, notably PaLM (Chowdhery et al., 2022; Anil et al., 2023) and LLaMA (Touvron et al., 2023a;b). In particular, we have opted to use FP32 precision for the inverse frequency matrix, rather than BF16 or FP16, in order to prioritize model performance and achieve higher accuracy. | 2309.16609#20 | Qwen Technical Report
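The RoPE choice described in the chunk above, including keeping the inverse-frequency table in float32, can be sketched as follows. This is a generic rotary-embedding illustration, not Qwen's actual implementation:

```python
import numpy as np

def rope(x: np.ndarray, position: int, base: float = 10000.0) -> np.ndarray:
    """Apply rotary position embedding to one head vector x of even dim.

    The inverse-frequency table is built in float32 (not bf16/fp16),
    mirroring the precision choice discussed in the paper; the rest is
    a textbook RoPE sketch with interleaved pairs.
    """
    dim = x.shape[-1]
    inv_freq = 1.0 / base ** (np.arange(0, dim, 2, dtype=np.float32) / dim)
    angles = position * inv_freq          # one angle per 2-D pair
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[0::2], x[1::2]             # interleaved (even, odd) pairs
    out = np.empty_like(x)
    out[0::2] = x1 * cos - x2 * sin       # rotate each pair by its angle
    out[1::2] = x1 * sin + x2 * cos
    return out

q = np.ones(8, dtype=np.float32)
print(rope(q, position=3))
```

Because each pair undergoes a pure rotation, the vector norm is preserved, and position 0 leaves the input unchanged.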
2309.16797 | 20 | (and crossover) prompt, as well as a prompt that asks for a mutant of the difference between two parent prompts, to produce offspring prompts. EvoPrompt is initialized with a whole population of initial hand-designed task-tailored prompts rather than a single problem description as we do. In contrast to the two approaches above, Promptbreeder uses LLMs to self-referentially improve mutation-prompts, and it is able to evolve contexts as well. | 2309.16797#20 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
2309.16609 | 21 | ⢠Bias. For most layers, we remove biases following Chowdhery et al. (2022), but we add biases in the QKV layer of attention to enhance the extrapolation ability of the model (Su, 2023b).
• Pre-Norm & RMSNorm. In modern Transformer models, pre-normalization is the most widely used approach, which has been shown to improve training stability compared to post-normalization. Recent research has suggested alternative methods for better training stability, which we plan to explore in future versions of our model. Additionally, we have replaced the traditional layer normalization technique described in (Ba et al., 2016) with RMSNorm (Jiang et al., 2023). This change has resulted in equivalent performance while also improving efficiency.
• Activation function. We have selected SwiGLU (Shazeer, 2020) as our activation function, a combination of Swish (Ramachandran et al., 2017) and Gated Linear Unit (Dauphin et al., 2017). Our initial experiments have shown that activation functions based on GLU generally outperform other baseline options, such as GeLU (Hendrycks & Gimpel, 2016). As is common practice in previous research, we have reduced the dimension of the feed-forward network (FFN) from 4 times the hidden size to 8
2.4 TRAINING | 2309.16609#21 | Qwen Technical Report
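The RMSNorm and SwiGLU choices in the bullets above can be sketched numerically. This is a minimal NumPy illustration with toy sizes and random weights, not Qwen's actual code:

```python
import numpy as np

def rms_norm(x, weight, eps=1e-6):
    # RMSNorm: rescale by the root-mean-square only -- no mean
    # subtraction and no bias -- which is cheaper than LayerNorm.
    return x / np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps) * weight

def swiglu(x, w_gate, w_up, w_down):
    # SwiGLU FFN: Swish(x @ W_gate) gates (x @ W_up), then project down.
    gate = x @ w_gate
    swish = gate / (1.0 + np.exp(-gate))  # Swish / SiLU activation
    return (swish * (x @ w_up)) @ w_down

hidden, ffn = 8, 16  # toy sizes; the paper shrinks ffn below 4 * hidden
rng = np.random.default_rng(0)
x = rng.standard_normal((2, hidden))
y = swiglu(rms_norm(x, np.ones(hidden)),
           rng.standard_normal((hidden, ffn)),
           rng.standard_normal((hidden, ffn)),
           rng.standard_normal((ffn, hidden)))
print(y.shape)
```

Because SwiGLU has three weight matrices instead of two, reducing the FFN width (as the truncated bullet above describes) keeps the parameter count comparable to a standard 4x FFN.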
2309.16797 | 21 | Self-Referential Self-Improvement: Developing an open-ended system that can improve itself as well as improving the way it is improving itself (Schmidhuber, 1993; 2003) is a long-standing open problem in AI research. Schmidhuber (1993) introduced an "introspective" neural network with a self-referential weight matrix that can modify its own weights and, thus, also modify those weights that are governing how its own weights are modified. Recently, Irie et al. (2022) proposed a more scalable self-referential weight matrix taking inspiration from fast weight programmers (Schmidhuber, 1992). Kirsch & Schmidhuber (2022) propose a self-referential meta-learning approach, combining self-referential weight matrices with ideas from Gödel Machines (Schmidhuber, 2003), i.e., to allocate more computational resources to better performing solutions. However, since these approaches directly modify parameters of a model, it is unclear how to scale them to the increasing number of parameters in modern LLMs. In contrast, for Promptbreeder the substrate of self-referential self-improvement is natural language, avoiding costly parameter updates altogether. | 2309.16797#21 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
2309.16609 | 22 | 2.4 TRAINING
To train QWEN, we follow the standard approach of autoregressive language modeling, as described in Radford et al. (2018). This involves training the model to predict the next token based on the context provided by the previous tokens. We train models with context lengths of 2048. To create batches of data, we shuffle and merge the documents, and then truncate them to the specified context lengths. To improve computational efficiency and reduce memory usage, we employ Flash Attention in the attention modules (Dao et al., 2022). We adopt the standard optimizer AdamW (Kingma & Ba, 2014; Loshchilov & Hutter, 2017) for pretraining optimization. We set the hyperparameters β1 = 0.9, β2 = 0.95, and ϵ = 10⁻⁸. We use a cosine learning rate schedule with a specified peak learning rate for each model size. The learning rate is decayed to a minimum learning rate of 10% of the peak learning rate. All the models are trained with BFloat16 mixed precision for training stability.
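The cosine schedule with a floor at 10% of the peak learning rate, as described above, can be written in a few lines. The optional linear warmup below is a common addition and an assumption on our part, not stated in the excerpt:

```python
import math

def cosine_lr(step: int, total_steps: int, peak_lr: float,
              warmup_steps: int = 0, min_ratio: float = 0.1) -> float:
    """Cosine decay from peak_lr down to min_ratio * peak_lr."""
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)  # linear warmup
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    cosine = 0.5 * (1.0 + math.cos(math.pi * min(1.0, progress)))
    min_lr = min_ratio * peak_lr
    return min_lr + (peak_lr - min_lr) * cosine

peak = 3.0e-4                       # setting from Table 1
print(cosine_lr(0, 1000, peak))     # starts at the peak
print(cosine_lr(1000, 1000, peak))  # ends at the 10% floor
```

At the halfway point the learning rate sits exactly midway between peak and floor, which is the defining property of the cosine shape.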
2.5 CONTEXT LENGTH EXTENSION | 2309.16609#22 | Qwen Technical Report
2309.16797 | 22 | Open-Endedness and LLMs: Promptbreeder makes use of the observation by Lehman et al. (2022), Meyerson et al. (2023) and Chen et al. (2023) that LLMs are effective at generating mutations from examples. In addition, LLMs encode human notions of interestingness and can be used to automatically quantify novelty (Zhang et al., 2023a). Promptbreeder is related to Picbreeder (Secretan et al., 2008), an open-ended human-in-the-loop system that evolves increasingly interesting images. While Picbreeder explores the space of images, Promptbreeder explores the space of prompts and does so without humans in the loop. As Promptbreeder is proposing mutated prompts to itself, it is an example of a system transitioning from "learning from data" to "learning what data to learn from" (Jiang et al., 2022).
# 3 PROMPTBREEDER
We introduce Promptbreeder, a prompt evolution system that can automatically explore prompts for a given domain and that is able to find task-prompts that improve an LLM's ability to derive answers to questions in that domain. Promptbreeder is general purpose in that the same system is able to adapt to many different domains. | 2309.16797#22 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution
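The core Promptbreeder mechanism described in these chunks, task-prompts mutated by LLM-generated mutation-prompts, with the mutation-prompts themselves also improved self-referentially, can be sketched as below. The `llm` callable and the prompt templates are stand-ins of ours, not the paper's verbatim templates:

```python
def mutate(llm, mutation_prompt: str, task_prompt: str) -> str:
    """First-order mutation: condition the LLM on a mutation-prompt plus
    a parent task-prompt and take the completion as the offspring.
    `llm` is any callable mapping a prompt string to a completion."""
    return llm(f"{mutation_prompt}\nINSTRUCTION: {task_prompt}\nMUTANT:")

def hypermutate(llm, hyper_prompt: str, mutation_prompt: str) -> str:
    # Self-referential step: the same LLM also improves the
    # mutation-prompt that drives the mutations above.
    return llm(f"{hyper_prompt}\n{mutation_prompt}\nIMPROVED:")

# Canned "LLM" so the sketch runs offline.
toy_llm = lambda prompt: "Carefully solve the problem step by step."

child = mutate(toy_llm, "Rewrite this instruction to be clearer.",
               "Solve the math word problem, show your work.")
print(child)
```

In the full system, offspring replace losers of binary tournaments in the population, and `hypermutate` is applied occasionally so the mutation operators themselves evolve.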
2309.16609 | 23 | 2.5 CONTEXT LENGTH EXTENSION
Transformer models have a significant limitation in terms of the context length for their attention mechanism. As the context length increases, the quadratic-complexity computation leads to a drastic increase in both computation and memory costs. In this work, we have implemented simple training-free techniques that are solely applied during inference to extend the context length of the model. One of the key techniques we have used is NTK-aware interpolation (bloc97, 2023).
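NTK-aware interpolation extends the context window at inference time by rescaling RoPE's frequency base so that low frequencies are interpolated while high frequencies are left nearly untouched. The sketch below follows bloc97's public description of the trick; it may differ from Qwen's exact variant:

```python
def ntk_scaled_base(base: float, dim: int, scale: float) -> float:
    # Instead of shrinking all positions by `scale` (plain linear
    # interpolation), grow the RoPE base so the lowest frequency is
    # stretched by `scale` while the highest barely changes.
    # Training-free: applied only at inference.
    return base * scale ** (dim / (dim - 2))

trained_ctx, target_ctx = 2048, 8192
scale = target_ctx / trained_ctx  # 4x context extension
new_base = ntk_scaled_base(10000.0, dim=128, scale=scale)
print(f"base 10000 -> {new_base:.0f} for {scale:.0f}x context")
```

With `scale = 1` the base is unchanged, so the method degrades gracefully to the original model inside the trained context length.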
Table 2: Overall performance on widely-used benchmarks compared to open-source base models. Our largest QWEN model with 14 billion parameters outperforms previous 13B SoTA models on all datasets. | 2309.16609#23 | Qwen Technical Report
2309.16797 | 23 | Promptbreeder makes use of the observation that LLMs can be used to generate variations of input text (Lehman et al., 2022; Meyerson et al., 2023; Chen et al., 2023). Figure 1 gives an overview of our method. We are interested in evolving task-prompts. A task-prompt P is a string used to condition the context of an LLM in advance of some further input Q, intended to ensure a better response than if Q had been presented in the absence of P . To evaluate the fitness of each evolved task-prompt, we sample a batch of 100 Q&A pairs from the entire training set of the domain at hand.2
Promptbreeder generates task-prompts according to an evolutionary algorithm. The mutation operator for this algorithm is itself an LLM, conditioned on a mutation-prompt M. That is, a mutated task-prompt P′ is defined by P′ = LLM(M + P), where "+" corresponds to string concatenation. A variety of such mutation-prompts are described in Section 3.2. | 2309.16797#23 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutationprompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 24 | Model | Params | MMLU (5-shot) | C-Eval (5-shot) | GSM8K (8-shot) | MATH (4-shot) | HumanEval (0-shot) | MBPP (3-shot)
MPT | 7B | 30.8 | 23.5 | 9.1 | 3.0 | 18.3 | 22.8
MPT | 30B | 47.9 | - | 15.2 | 3.1 | 25.0 | 32.8
Falcon | 7B | 27.8 | - | 6.8 | 2.3 | - | 11.2
Falcon | 40B | 57.0 | - | 19.6 | 5.5 | - | 29.8
ChatGLM2 | 6B | 47.9 | 51.7 | 32.4 | 6.5 | - | -
InternLM | 7B | 51.0 | 53.4 | 31.2 | 6.3 | 10.4 | 14.0
InternLM | 20B | 62.1 | 58.8 | 52.6 | 7.9 | 25.6 | 35.6
Baichuan2 | 7B | 54.7 | 56.3 | 24.6 | 5.6 | 18.3 | 24.2
Baichuan2 | 13B | 59.5 | 59.0 | 52.8 | 10.1 | 17.1 | 30.2
LLaMA | 7B | 35.6 | 27.3 | 11.0 | 2.9 | 12.8 | 17.7
LLaMA | 13B | 47.7 | 31.8 | 20.3 | 4.2 | 15.8 | 22.0
LLaMA | 33B | 58.7 | 37.5 | 42.3 | 7.1 | 21.7 | 30.2
LLaMA | 65B | 63.7 | 40.4 | 54.4 | 10.6 | 23.7 | 37.7 | 2309.16609#24 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 24 | Promptbreeder's main self-referential mechanism stems from applying the evolutionary algorithm not just to task-prompts but also to mutation-prompts. The mutation operator for this meta-level algorithm is again an LLM, now conditioned on a hyper-mutation prompt H. That is, we obtain a mutated mutation-prompt M′ via M′ = LLM(H + M).
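The meta-level step M′ = LLM(H + M) can be sketched as follows. This is an illustrative reconstruction: the `llm` callable, the dictionary layout of a unit, and the example hyper-prompt string are placeholders, not artifacts of the paper.

```python
def hyper_mutate(llm, unit, hyper_prompt):
    """Meta-level mutation: the LLM rewrites the unit's mutation-prompt
    itself (M' = LLM(H + M)), so the operator that improves task-prompts
    is itself improved over the course of evolution."""
    new_mutation_prompt = llm(hyper_prompt + " " + unit["mutation_prompt"])
    return {**unit, "mutation_prompt": new_mutation_prompt}

# Toy usage with a stand-in LLM; the task-prompt is left untouched.
unit = {"task_prompt": "solve it", "mutation_prompt": "Make a variant of the prompt."}
unit = hyper_mutate(lambda s: "improved: " + s, unit,
                    hyper_prompt="Improve this instruction:")
```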
Given a set of "thinking styles" T and a set of initial mutation-prompts M, as well as a domain-specific problem description D, Promptbreeder initializes a population of mutated task-prompts (see Section 3.1). To clarify, a unit of evolution consists of a set of task-prompts, a mutation-prompt and, in the few-shot case, a set of correct workings out (i.e. step-by-step or "chains-of-thought" reasoning steps that led to the correct answer). This means task-prompts and mutation-prompts are in 1:1 correspondence. To evolve this population, we employ a binary tournament genetic algorithm framework (Harvey, 2011): we sample two individuals from the population, take the one with the higher fitness, mutate it (see next section), and overwrite the loser with the mutated copy of the winner.
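The binary tournament loop with the LLM-driven mutation P′ = LLM(M + P) can be sketched as follows. A minimal sketch, assuming a unit is a plain dictionary and `llm` is a stand-in callable; in the paper, fitness would be accuracy on a sampled batch of 100 Q&A pairs.

```python
import random

def mutate(llm, unit):
    """One mutation step: the LLM, conditioned on the unit's
    mutation-prompt M, rewrites its task-prompt P (P' = LLM(M + P))."""
    new_task_prompt = llm(unit["mutation_prompt"] + " " + unit["task_prompt"])
    return {**unit, "task_prompt": new_task_prompt}

def binary_tournament(population, fitness, llm, steps=100):
    """Sample two units, keep the fitter one, and overwrite the loser
    with a mutated copy of the winner."""
    for _ in range(steps):
        i, j = random.sample(range(len(population)), 2)
        if fitness(population[i]) < fitness(population[j]):
            i, j = j, i                      # i is now the winner
        population[j] = mutate(llm, population[i])
    return population
```

A toy run replaces the LLM with a string transformation and scores units by prompt length; with a real LLM, `fitness` would execute the task-prompts on held-out questions.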
3.1 PROMPTBREEDER INITIALIZATION | 2309.16797#24 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutationprompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 25 | Table 2 (continued). LLaMA row fragment carried over from the previous chunk — GSM8K (33B/65B): 42.3, 54.4; MATH (7B/13B/33B/65B): 2.9, 4.2, 7.1, 10.6; HumanEval: 12.8, 15.8, 21.7, 23.7; MBPP: 17.7, 22.0, 30.2, 37.7.
Model | Params | MMLU (5-shot) | C-Eval (5-shot) | GSM8K (8-shot) | MATH (4-shot) | HumanEval (0-shot) | MBPP (3-shot) | BBH (3-shot)
LLAMA 2 | 7B | 46.8 | 32.5 | 16.7 | 3.3 | 12.8 | 20.8 | 38.2
LLAMA 2 | 13B | 55.0 | 41.4 | 29.6 | 5.0 | 18.9 | 30.3 | 45.6
LLAMA 2 | 34B | 62.6 | - | 42.2 | 6.2 | 22.6 | 33.0 | 44.1
LLAMA 2 | 70B | 69.8 | 50.1 | 63.3 | 13.5 | 29.9 | 45.0 | 64.9
StableBeluga2 | 70B | 68.6 | 51.4 | 69.6 | 14.6 | 28.0 | 11.4 | 69.3
QWEN | 1.8B | 44.6 | 54.7 | 21.2 | 5.6 | 17.1 | 14.8 | 28.2
QWEN | 7B | 58.2 | 63.5 | 51.7 | 11.6 | 29.9 | 31.6 | 45.0
QWEN | 14B | 66.3 | 72.1 | 61.3 | 24.8 | 32.3 | 40.8 | 53.4
BBH (3-shot) for the models listed in the previous chunk: MPT 7B 35.6; MPT 30B 38.0; Falcon 7B 28.0; Falcon 40B 37.1; ChatGLM2 6B 33.7; InternLM 7B 37.0; InternLM 20B 52.5; Baichuan2 7B 41.6; Baichuan2 13B 49.0; LLaMA 7B 33.5; LLaMA 13B 37.9; LLaMA 33B 50.0; LLaMA 65B 58.4. | 2309.16609#25 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 25 | To give a concrete example, consider the initialization steps used to produce the task-prompts and mutation-prompts for GSM8K (a "grade school maths" word problem dataset). The problem description is "Solve the math word problem, giving your answer as an arabic numeral". Because Plan-and-Solve (Wang et al., 2023b) uses two task-prompts, we also evolve two task-prompts (plus a mutation-prompt) per unit of evolution. In order to promote diversity in the initial prompts, we generate the initial task-prompts by concatenating (for each task-prompt) a randomly drawn "mutation-prompt" (e.g. "Make a variant of the prompt.") and a randomly drawn "thinking-style" (e.g. "Let's think step by step") to the problem description, and provide that to the LLM to produce a continuation, resulting in an initial task-prompt. We do this twice to produce the two initial task-prompts per unit. Both the mutation-prompt and the thinking-style are randomly sampled from an initial set of mutation-prompts and a set of | 2309.16797#25 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutationprompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 26 | Unlike position interpolation (PI) (Chen et al., 2023a) which scales each dimension of RoPE equally, NTK-aware interpolation adjusts the base of RoPE to prevent the loss of high-frequency information in a training-free manner. To further improve performance, we have also implemented a trivial extension called dynamic NTK-aware interpolation, which is later formally discussed in (Peng et al., 2023a). It dynamically changes the scale by chunks, avoiding severe performance degradation. These techniques allow us to effectively extend the context length of Transformer models without compromising their computational efficiency or accuracy.
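The base adjustment behind (dynamic) NTK-aware interpolation can be sketched as follows. This is an illustrative reconstruction of the commonly cited formulation — the rotary base is scaled by s^(d/(d-2)), with the scale s growing with the sequence length — not Qwen's actual implementation.

```python
def ntk_rope_inv_freq(dim, seq_len, train_len=2048, base=10000.0):
    """Inverse RoPE frequencies with (dynamic) NTK-aware base scaling.

    Rather than shrinking every position index (position interpolation),
    the rotary base is enlarged, which stretches the low-frequency
    (long-wavelength) dimensions while leaving the high-frequency
    dimensions almost untouched.
    """
    # Dynamic variant: only rescale once the context exceeds the training length.
    scale = max(1.0, seq_len / train_len)
    adjusted_base = base * scale ** (dim / (dim - 2))
    return [adjusted_base ** (-2.0 * i / dim) for i in range(dim // 2)]

short = ntk_rope_inv_freq(dim=128, seq_len=2048)  # scale == 1 -> plain RoPE
long = ntk_rope_inv_freq(dim=128, seq_len=8192)   # scale == 4 -> enlarged base
assert short[0] == long[0] == 1.0                 # highest frequency preserved
assert long[-1] < short[-1]                       # lowest frequency stretched
```

Because only the base changes, no retraining is required, which matches the "training-free" property described above.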
QWEN additionally incorporates two attention mechanisms: LogN-Scaling (Chiang & Cholak, 2022; Su, 2023a) and window attention (Beltagy et al., 2020). LogN-Scaling rescales the dot product of the query and value by a factor that depends on the ratio of the context length to the training length, ensuring that the entropy of the attention value remains stable as the context length grows. Window attention restricts the attention to a limited context window, preventing the model from attending to tokens that are too far away. | 2309.16609#26 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16609 | 27 | We also observed that the long-context modeling ability of our model varies across layers, with lower layers being more sensitive to context length extension than higher layers. To leverage this observation, we assign different window sizes to each layer, using shorter windows for lower layers and longer windows for higher layers.
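The LogN-Scaling factor and a per-layer window schedule can be sketched roughly as follows. The log-ratio factor follows the LogN-Scaling description; the linear interpolation and the concrete window sizes are assumptions, since the report does not specify them.

```python
import math

def logn_scale(context_len, train_len=2048):
    """LogN-Scaling factor: attention logits are multiplied by
    log(context_len) / log(train_len) once the context exceeds the
    training length, keeping the attention entropy roughly stable."""
    return max(1.0, math.log(context_len) / math.log(train_len))

def layer_window_sizes(num_layers, short=2048, long=8192):
    """Shorter attention windows for the length-sensitive lower layers,
    longer windows for higher layers (linear interpolation is assumed)."""
    return [round(short + (long - short) * layer / (num_layers - 1))
            for layer in range(num_layers)]

assert logn_scale(1024) == 1.0   # within training length: no rescaling
assert logn_scale(8192) > 1.0    # longer context: entropy-stabilizing boost
sizes = layer_window_sizes(num_layers=32)
assert sizes[0] == 2048 and sizes[-1] == 8192
```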
2.6 EXPERIMENTAL RESULTS
To evaluate the zero-shot and few-shot learning capabilities of our models, we conduct a thorough benchmark assessment using a series of datasets. We compare QWEN with the most recent open-source base models, including LLaMA (Touvron et al., 2023a), LLAMA 2 (Touvron et al., 2023b), MPT (Mosaic ML, 2023), Falcon (Almazrouei et al., 2023), Baichuan2 (Yang et al., 2023), ChatGLM2 (ChatGLM2 Team, 2023), InternLM (InternLM Team, 2023), XVERSE (Inc., 2023b), and StableBeluga2 (Stability AI, 2023). Our evaluation covers a total of 7 popular benchmarks,
| 2309.16609#27 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 27 | For the example above, the complete input string to the LLM to make an initial task-prompt could be "Make a variant of the prompt. INSTRUCTION: Solve the math word problem, giving your answer as an arabic numeral. INSTRUCTION MUTANT:". Note how the control strings "INSTRUCTION" and "INSTRUCTION MUTANT" are added to encourage an appropriate continuation. Table 4 in Appendix E shows examples of the initial prompts generated in this way.
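The initialization described above can be sketched as follows. A minimal sketch: the `llm` callable and the prompt sets are placeholders ("Let's think step by step" appears in the text; the other entries are illustrative), and only the control-string layout follows the example given.

```python
import random

THINKING_STYLES = ["Let's think step by step.",
                   "Break the problem into smaller parts."]   # second entry illustrative
MUTATION_PROMPTS = ["Make a variant of the prompt.",
                    "Rephrase the instruction."]              # second entry illustrative

def initial_task_prompt(llm, problem_description):
    """Seed one task-prompt: concatenate a random mutation-prompt and
    thinking-style with the problem description, plus the control strings
    that encourage an appropriate continuation."""
    seed = (random.choice(MUTATION_PROMPTS) + " "
            + random.choice(THINKING_STYLES)
            + " INSTRUCTION: " + problem_description
            + " INSTRUCTION MUTANT:")
    return llm(seed)

def initial_unit(llm, problem_description):
    # Two task-prompts per unit (mirroring Plan-and-Solve) plus a mutation-prompt.
    return {
        "task_prompts": [initial_task_prompt(llm, problem_description)
                         for _ in range(2)],
        "mutation_prompt": random.choice(MUTATION_PROMPTS),
    }
```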
2 Our prompt strategy sequentially applies two task-prompts. The first task-prompt + question produces a continuation. The continuation + second task-prompt produces the final answer.
3.2 MUTATION OPERATORS | 2309.16797#27 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutationprompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 28 |
Table 3: Results of QWEN on long-context inference using various techniques. Our experimental findings reveal that the application of our crucial techniques enables the model to consistently achieve low perplexity as the context length increases. This suggests that these techniques play a significant role in enhancing the model's ability to comprehend and generate lengthy texts.
Model | 1024 | 2048 | 4096 | 8192 | 16384 (sequence length)
QWEN-7B | 4.23 | 3.78 | 39.35 | 469.81 | 2645.09
+ dynamic ntk | 4.23 | 3.78 | 3.59 | 3.66 | 5.71
+ dynamic ntk + logn | 4.23 | 3.78 | 3.58 | 3.56 | 4.62
+ dynamic ntk + logn + window attn | 4.23 | 3.78 | 3.58 | 3.49 | 4.32
QWEN-14B | - | 3.46 | 22.79 | 334.65 | 3168.35
+ dynamic ntk + logn + window attn | - | 3.46 | 3.29 | 3.18 | 3.42 | 2309.16609#28 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
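The sequence-length sweep in Table 3 above is reported in perplexity. As a reminder of what that metric measures, here is a minimal sketch (not the paper's evaluation code); the `token_nlls` input is assumed to be per-token negative log-likelihoods already produced by a model:

```python
import math

def perplexity(token_nlls: list[float]) -> float:
    """Perplexity is exp of the mean per-token negative log-likelihood,
    so lower values mean the model predicts the text better."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# A uniform NLL of 0 means perfect prediction, giving perplexity 1.0.
print(perplexity([0.0, 0.0, 0.0]))  # -> 1.0
```

This is why the raw QWEN-7B numbers blow up past 4096 tokens (perplexity in the thousands) while the extrapolation techniques keep it in the 3-5 range.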
2309.16797 | 28 | 5
3.2 MUTATION OPERATORS
As shown in Figure 1, there are nine operators falling into five broad classes which drive the exploration of prompt strategies. For each replication event only one of nine mutation operators is applied (we sample with uniform probability over the nine operators to decide which mutation operator to apply). The rationale for using this diverse set of operators is to enable the LLM to explore a large space of cognitive methods of linguistic self-questioning, by repeatedly changing the framing of the problem as well as retrieving mental models expressed in natural language that can help tackle a given reasoning challenge. Investigations from insight learning strongly suggest that diverse representational re-description is key to problem solving (Öllinger & Knoblich, 2009), a principle that we attempt to recreate via self-referential self-improvement with natural language as the substrate. Figure 2 illustrates in what way Promptbreeder is self-referential (see Appendix F for a more detailed explanation).
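The per-replication operator sampling described above can be sketched as follows. Only some of the nine operators are named in this excerpt, so the list below uses illustrative shorthand names rather than the authors' exact identifiers:

```python
import random

# Nine mutation operators in five broad classes (shorthand names,
# reconstructed for illustration; not the authors' code).
MUTATION_OPERATORS = [
    "zero_order_prompt_generation",
    "first_order_prompt_generation",
    "eda_mutation",
    "eda_rank_and_index_mutation",
    "lineage_based_mutation",
    "zero_order_hypermutation",
    "first_order_hypermutation",
    "lamarckian_mutation",
    "crossover_and_context_shuffling",
]

def sample_operator(rng: random.Random) -> str:
    # Each replication event applies exactly one operator,
    # chosen with uniform probability over the nine.
    return rng.choice(MUTATION_OPERATORS)
```

Uniform sampling keeps every class of linguistic re-description in play throughout evolution instead of collapsing onto one operator early.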
3.2.1 DIRECT MUTATION
The simplest class of mutation operators directly generates a new task-prompt P′ from either one existing task-prompt P (first-order prompt generation) or from a general prompt that encourages free-form generation of new task-prompts, i.e. not using an existing parent (zero-order prompt generation). | 2309.16797#28 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
] |
2309.16609 | 29 | which are MMLU (5-shot) (Hendrycks et al., 2020), C-Eval (5-shot) (Huang et al., 2023), GSM8K (8-shot) (Cobbe et al., 2021), MATH (4-shot) (Hendrycks et al., 2021), HumanEval (0-shot) (Chen et al., 2021), MBPP (0-shot) (Austin et al., 2021), and BBH (Big Bench Hard) (3 shot) (Suzgun et al., 2022). We aim to provide a comprehensive summary of the overall performance of our models across these benchmarks.
In this evaluation, we focus on the base language models without alignment and collect the baselines' best scores from their official results and OpenCompass (OpenCompass Team, 2023). The results are presented in Table 2. | 2309.16609#29 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
] |
2309.16797 | 29 | Zero-order Prompt Generation: We generate a new task-prompt by concatenating the problem description D (e.g. "Solve the math word problem, giving your answer as an arabic numeral") with the prompt "A list of 100 hints:", which invites the LLM to come up with a new hint that could help solve a problem in the given problem domain. We extract the first generated hint as the new task-prompt. Crucially, this new task-prompt does not depend on any previously found task-prompt. Instead, it is re-generated from the problem description each time. Our rationale for including this zero-order operator is that where prompt evolution diverges, this operator allows us to generate new task-prompts closely related to the original problem description, similar to uniform re-sampling in automated curriculum learning approaches (Jiang et al., 2021b;a; Park et al., 2023; Parker-Holder et al., 2022). | 2309.16797#29 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
] |
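The zero-order generation step described in the Promptbreeder excerpt above can be sketched as a small function. Here `llm` is a stand-in callable (str -> str), not a real model API, and the numbered-list parsing is an assumption about the completion format:

```python
def zero_order_prompt(problem_description: str, llm) -> str:
    """Ask the LLM for a list of hints and take the first one as a
    fresh task-prompt, independent of any existing parent prompt."""
    completion = llm(f"{problem_description} A list of 100 hints:")
    first_line = completion.strip().splitlines()[0]
    # Strip a leading list number such as "1. " if the LLM produced one.
    return first_line.lstrip("0123456789. ").strip()
```

Because the task-prompt is regenerated from the problem description each time, this operator acts like uniform re-sampling: it pulls a diverging population back toward prompts grounded in the original problem.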
2309.16609 | 30 | Our experimental results demonstrate that the three QWEN models exhibit exceptional performance across all downstream tasks. It is worth noting that even the larger models, such as LLaMA2-70B, are outperformed by QWEN-14B in 3 tasks. QWEN-7B also performs admirably, surpassing LLaMA2-13B and achieving comparable results to Baichuan2-13B. Notably, despite having a relatively small number of parameters, QWEN-1.8B is capable of competitive performance on certain tasks and even outperforms larger models in some instances. The findings highlight the impressive capabilities of the QWEN models, particularly QWEN-14B, and suggest that smaller models, such as QWEN-1.8B, can still achieve strong performance in certain applications.
To evaluate the effectiveness of context length extension, Table 3 presents the test results on arXiv3 in terms of perplexity (PPL). These results demonstrate that by combining NTK-aware interpolation, LogN-Scaling, and layer-wise window assignment, we can effectively maintain the performance of our models in the context of over 8192 tokens.
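A rough sketch of how two of these scaling factors behave. This uses one common formulation of dynamic NTK-aware interpolation and LogN-Scaling; Qwen's exact implementation may differ:

```python
import math

def dynamic_ntk_base(base: float, dim: int, seq_len: int, train_len: int) -> float:
    """Enlarge the RoPE frequency base once the inference length exceeds the
    training length, so rotary frequencies are interpolated rather than
    extrapolated (a common dynamic-NTK formulation)."""
    if seq_len <= train_len:
        return base
    scale = seq_len / train_len
    return base * scale ** (dim / (dim - 2))

def logn_scale(seq_len: int, train_len: int) -> float:
    """LogN-Scaling: factor applied to query vectors to keep attention
    entropy stable as the context grows beyond the training length."""
    return max(1.0, math.log(seq_len) / math.log(train_len))
```

Both factors are 1.0 (a no-op) at the training length and grow smoothly beyond it, which matches the table: short contexts are unaffected while long-context perplexity stays low.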
# 3 ALIGNMENT | 2309.16609#30 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
] |
2309.16797 | 30 | First-order Prompt Generation: We concatenate the mutation-prompt (red) to the parent task-prompt (blue), and pass it to the LLM to produce the mutated task-prompt. For example "Say that instruction again in another way. DON'T use any of the words in the original instruction there's a good chap. math word problem, giving your answer as an arabic numeral. MUTANT: ". This procedure is identical to the initialization method, except that a randomly sampled thinking-style string is not used. First-order prompt generation is Promptbreeder's standard asexual mutation operator, and it is the core of every genetic algorithm: taking one parental genotype (task-prompt) and applying the mutation to it (in this case influenced by the mutation-prompt).
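The first-order operator can be sketched as code. The exact template is an assumption modeled on the example above, and `llm` is any callable (str -> str) standing in for the real model:

```python
def first_order_mutation(mutation_prompt: str, task_prompt: str, llm) -> str:
    """Concatenate the mutation-prompt with the parent task-prompt and
    take the LLM's continuation after 'MUTANT:' as the mutated prompt."""
    query = f"{mutation_prompt} {task_prompt}. MUTANT: "
    return llm(query).strip()
```

Swapping in different mutation-prompts changes how the parent is rephrased, which is exactly the knob that Promptbreeder's self-referential loop evolves.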
3.2.2 ESTIMATION OF DISTRIBUTION MUTATION
The next class of mutation operators condition not just on zero or one parent, but instead on a set of parents. As such, they may be more expressive by considering patterns in the population. | 2309.16797#30 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutationprompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
] |
2309.16609 | 31 | # 3 ALIGNMENT
Pretrained large language models have been found not to be aligned with human behavior, making them unsuitable for serving as AI assistants in most cases. Recent research has shown that the use of alignment techniques, such as supervised finetuning (SFT) and reinforcement learning from human feedback (RLHF), can significantly improve the ability of language models to engage in natural conversation. In this section, we will delve into the details of how QWEN models have been trained using SFT and RLHF, and evaluate their performance in the context of chat-based assistance.
3.1 SUPERVISED FINETUNING
To gain an understanding of human behavior, the initial step is to carry out SFT, which finetunes a pretrained LLM on chat-style data, including both queries and responses. In the following sections, we will delve into the details of data construction and training methods.
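As a toy illustration of what "chat-style data, including both queries and responses" looks like once flattened into a single training string. The role-tag template here is hypothetical, not Qwen's actual format:

```python
def format_chat_example(turns: list[dict]) -> str:
    """Flatten a query/response conversation into one training string.
    Each turn is {'role': 'user' | 'assistant', 'content': ...}.
    The <|role|> delimiters are an illustrative choice only."""
    lines = [f"<|{t['role']}|>\n{t['content']}" for t in turns]
    return "\n".join(lines)
```

During SFT the loss is typically computed on the assistant spans of such strings, teaching the pretrained model to respond rather than merely continue text.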
# 3The dataset contains academic papers from https://arxiv.org.
3.1.1 DATA | 2309.16609#31 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
] |
2309.16797 | 31 | The next class of mutation operators condition not just on zero or one parent, but instead on a set of parents. As such, they may be more expressive by considering patterns in the population.
Estimation of Distribution (EDA) Mutation: Inspired by Hauschild & Pelikan (2011), we provide a filtered and numbered list of the current population of task-prompts to the LLM and ask it to continue this list with new task-prompts. We filter the population of prompts on the basis of BERT (Devlin et al., 2019) embedding cosine similarities between each other: an individual is not included in the list if it is more than 0.95 similar to any other entry in the list, thus encouraging diversity (cf. quality-diversity methods (Lehman & Stanley, 2011b;a; Mouret & Clune, 2015)). The prompts are listed in random order and we do not give the LLM access to the fitness values of individuals in the population; we found in preliminary experiments that the LLM did not understand these fitness values3 and resorted to generating copies of entries in the list.
3This is contrary to recent findings by Mirchandani et al. (2023). We leave it for future work to revisit whether LLMs can interpret fitness values for improved prompt evolution.
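The 0.95-similarity filter can be sketched as follows, using plain cosine similarity on precomputed embedding vectors (the BERT encoding step itself is elided):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def diversity_filter(prompts, embeddings, threshold=0.95):
    """Keep a prompt only if it is not more than `threshold` similar
    to any prompt already kept, encouraging a diverse list."""
    kept, kept_emb = [], []
    for p, e in zip(prompts, embeddings):
        if all(cosine(e, k) <= threshold for k in kept_emb):
            kept.append(p)
            kept_emb.append(e)
    return kept
```

Near-duplicate prompts are dropped before the list is shown to the LLM, so its continuation is conditioned on a spread of distinct strategies rather than many copies of one.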
6 | 2309.16797#31 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
] |
2309.16609 | 32 | # 3The dataset contains academic papers from https://arxiv.org.
3.1.1 DATA
To enhance the capabilities of our supervised finetuning datasets, we have annotated conversations in multiple styles. While conventional datasets (Wei et al., 2022a) contain a vast amount of data prompted with questions, instructions, and answers in natural language, our approach takes it a step further by annotating human-style conversations. This practice, inspired by Ouyang et al. (2022), aims at improving the modelâs helpfulness by focusing on natural language generation for diverse tasks. To ensure the modelâs ability to generalize to a wide range of scenarios, we specifically excluded data formatted in prompt templates that could potentially limit its capabilities. Furthermore, we have prioritized the safety of the language model by annotating data related to safety concerns such as violence, bias, and pornography. | 2309.16609#32 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 32 | 3This is contrary to recent findings by Mirchandani et al. (2023). We leave it for future work to revisit whether LLMs can interpret fitness values for improved prompt evolution.
[Figure 2 diagram: panels (a) Direct Mutation, (b) Mutation-Prompt Guided Mutation, (c) Hyper Mutation, (d) Promptbreeder]
Figure 2: Overview of multiple variants of self-referential prompt evolution. In (a), the LLM is directly used to generate variations P′ of a prompt strategy P (cf. Meyerson et al., 2023). Using a mutation prompt M, we can explicitly prompt an LLM to produce variations (b). By using a hyper mutation prompt H, we can also evolve the mutation prompt itself, turning the system into a self-referential one (c). Promptbreeder (d) improves the diversity of evolved prompts and mutation-prompts by generating an initial population of prompt strategies from a set of seed thinking-styles T, mutation-prompts M, as well as a high level description D of the problem domain. | 2309.16797#32 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutationprompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 33 | In addition to data quality, we have observed that the training method can significantly impact the final performance of the model. To achieve this, we utilized the ChatML-style format (OpenAI, 2022), which is a versatile meta language capable of describing both the metadata (such as roles) and the content of a turn. This format enables the model to effectively distinguish between various types of information, including system setup, user inputs, and assistant outputs, among others. By leveraging this approach, we can enhance the model's ability to accurately process and analyze complex conversational data.
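As a concrete illustration, the paragraph above can be sketched as a small formatting helper. The `<|im_start|>`/`<|im_end|>` markers follow the publicly documented ChatML convention; treat them as illustrative rather than as Qwen's exact special tokens, which the excerpt does not specify.

```python
# Minimal sketch of ChatML-style conversation formatting.
# The <|im_start|>/<|im_end|> markers follow the public ChatML
# convention; they are an assumption here, not Qwen's exact tokens.
def to_chatml(messages):
    """Render a list of {role, content} dicts as a ChatML string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    return "\n".join(parts)

conv = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
print(to_chatml(conv))
```

The role tag on its own delimiter line is what lets the model separate system setup, user inputs, and assistant outputs during training and inference.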
# 3.1.2 TRAINING
Consistent with pretraining, we also apply next-token prediction as the training task for SFT. We apply the loss masks for the system and user inputs. More details are demonstrated in Section A.1.1. | 2309.16609#33 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 33 | EDA Rank and Index Mutation: This is a variant of the above in which task-prompts are listed in fitness order. Preliminary experiments showed that the LLM is more likely to generate entries that are similar to the elements appearing later in the list. This is in line with similar findings of recency effects in LLMs (Liu et al., 2023). Therefore, after filtering in the same way as before, we ordered the task-prompts in the population by ascending order of fitness. The top of the list is prefixed by the following prompt: "INSTRUCTION: " + <<mutation-prompt>> + "A List of Responses in descending order of score." + <<last index + 1>> + " is the best response. It resembles " + <<last index>> + " more than it does (1)". Note that we have "lied" to the LLM by telling it that the order is descending. This is because otherwise it is too biased towards producing a new entry that is too similar to the final entry. The contradiction between the ascending ordering and the statement that it is a descending ordering appears to improve the diversity of sampling. The rationale for this operator is again to represent the current distribution in such a way that high fitness and yet diverse extrapolations are suggested by the LLM. | 2309.16797#33 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutationprompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
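The EDA rank-and-index mutation in the preceding chunk can be sketched as a prompt builder. The quoted template strings come from the text; the numbered-list layout of the listed task-prompts is an assumption.

```python
# Sketch of the EDA rank-and-index prompt from Promptbreeder.
# Template strings follow the paper's description; the "(i) ..."
# list layout for the population is an illustrative assumption.
def build_eda_rank_prompt(mutation_prompt, prompts_ascending_fitness):
    """Assemble the list prompt; prompts are in ASCENDING fitness order,
    but the header deliberately claims descending order (the 'lie')."""
    n = len(prompts_ascending_fitness)
    lines = [
        "INSTRUCTION: " + mutation_prompt,
        "A List of Responses in descending order of score.",
    ]
    for i, p in enumerate(prompts_ascending_fitness, start=1):
        lines.append(f"({i}) {p}")
    lines.append(
        f"({n + 1}) is the best response. It resembles ({n}) "
        "more than it does (1)."
    )
    return "\n".join(lines)
```

Sampling a continuation of this prompt from the LLM then yields the mutated task-prompt, biased toward (but not copying) the highest-fitness entries.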
2309.16609 | 34 | The model's training process utilizes the AdamW optimizer, with the following hyperparameters: β1 set to 0.9, β2 set to 0.95, and ϵ set to 10⁻⁸. The sequence length is limited to 2048, and the batch size is 128. The model undergoes a total of 4000 steps, with the learning rate gradually increased over the first 1430 steps, reaching a peak of 2 × 10⁻⁶. To prevent overfitting, weight decay is applied with a value of 0.1, dropout is set to 0.1, and gradient clipping is enforced with a limit of 1.0.
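A minimal sketch of the warm-up portion of this schedule: a linear ramp to the peak of 2 × 10⁻⁶ over the first 1430 of 4000 steps. The excerpt does not specify the post-warm-up behavior, so this sketch simply holds the peak afterwards (an assumption).

```python
# Sketch of the SFT learning-rate warm-up described in the text.
# Post-warm-up decay is unspecified in the excerpt, so the peak is
# held constant here as an assumption.
def sft_lr(step, peak=2e-6, warmup_steps=1430):
    """Linear warm-up to `peak` over `warmup_steps`, then hold."""
    if step < warmup_steps:
        return peak * step / warmup_steps
    return peak
```

Such a function is typically plugged into the optimizer loop (e.g. via a per-step scheduler) rather than called directly.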
3.2 REINFORCEMENT LEARNING FROM HUMAN FEEDBACK
While SFT has proven to be effective, we acknowledge that its generalization and creativity capabilities may be limited, and it is prone to overfitting. To address this issue, we have implemented Reinforcement Learning from Human Feedback (RLHF) to further align SFT models with human preferences, following the approaches of Ouyang et al. (2022); Christiano et al. (2017). This process involves training a reward model and using Proximal Policy Optimization (PPO) (Schulman et al., 2017) to conduct policy training.
3.2.1 REWARD MODEL | 2309.16609#34 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 34 | Lineage Based Mutation: For each unit of evolution, we store a history of the individuals in its lineage that were the best in the population, i.e., a historical chronological list of elites. This list is provided to the LLM in chronological order (not filtered by diversity), with the heading "GENOTYPES FOUND IN ASCENDING ORDER OF QUALITY" to produce a novel prompt as continuation. The rationale for this operator is that we expect the signal of improving genotype prompts may be stronger than the signal from prompts in the current population since they provide a gradient of bad to good prompts that could be followed (assuming this signal can be used by the LLM).
3.2.3 HYPERMUTATION: MUTATION OF MUTATION-PROMPTS
While the mutation operators above might already explore diverse task-prompts, a self-improving system should ideally also improve the way it is improving itself in a self-referential way. Our third class of mutation operators includes hyper-mutation operators concerned with the evolution of evolvability (Dawkins, 2003; Pigliucci, 2008; Payne & Wagner, 2019; Gajewski et al., 2019) — those which modify the search/exploration process rather than the task reward obtaining process directly.4 | 2309.16797#34 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutationprompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
2309.16609 | 35 | 3.2.1 REWARD MODEL
To create a successful reward model, like building a large language model (LLM), it is crucial to first undergo pretraining and then finetuning. This pretraining process, also known as preference model pretraining (PMP) (Bai et al., 2022b), necessitates a vast dataset of comparison data. This dataset consists of sample pairs, each containing two distinct responses for a single query and their corresponding preferences. Similarly, finetuning is also conducted on this type of comparison data, but with a higher quality due to the presence of quality annotations.
During the fine-tuning phase, we gather a variety of prompts and adjust the reward model based on human feedback for responses from the QWEN models. To ensure the diversity and complexity of user prompts are properly taken into account, we have created a classification system with around 6600 detailed tags and implemented a balanced sampling algorithm that considers both diversity and complexity when selecting prompts for annotation by the reward model (Lu et al., 2023). To generate a wide range of responses, we have utilized QWEN models of different sizes and sampling strategies, as diverse responses can help reduce annotation difficulties and enhance the performance of the reward model. These responses are then evaluated by annotators following a standard annotation guideline, and comparison pairs are formed based on their scores. | 2309.16609#35 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 35 | Zero-order Hyper-Mutation: We concatenate the original problem description to a randomly sam- pled thinking-style, and feed it to the LLM to generate a new mutation-prompt. The resulting mutation-prompt is applied to a task-prompt to make a variant of the task-prompt as in First-order Prompt Generation (see Section 3.2.1). Note that this zero-order meta-mutation operator is identical to that used during initialization. The rationale for this operator is to generate mutation operators in a way similar to initialization, while also bringing in knowledge from the set of thinking styles.
4This is similar to population based training (Jaderberg et al., 2017a) — instead of applying it to hyperparameters such as learning rates, it applies to the mutation-prompts of Promptbreeder.
| 2309.16797#35 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutationprompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
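The comparison pairs described in the preceding chunk are typically consumed by a pairwise (Bradley-Terry) ranking loss, as in Ouyang et al. (2022). The excerpt does not spell out the objective, so the following is a standard sketch rather than Qwen's exact formula.

```python
import math

# Standard pairwise (Bradley-Terry) reward-model loss sketch:
# -log sigmoid(r_chosen - r_rejected). This is the common RLHF
# objective, assumed here; the excerpt does not give Qwen's formula.
def pairwise_reward_loss(r_chosen, r_rejected):
    """Loss is small when the chosen response outscores the rejected one."""
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

Minimizing this over annotated pairs pushes the reward model to score preferred responses above rejected ones by a growing margin.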
2309.16609 | 36 | In creating the reward model, we utilize the same-sized pre-trained language model QWEN to initiate the process. It is important to mention that we have incorporated a pooling layer into the original
Table 4: Test Accuracy of QWEN preference model pretraining (PMP) and reward model (RM) on diverse human preference benchmark datasets.
Dataset QWEN QWEN Anthropic Anthropic Helpful-base Helpful-online Helpful-base Helpful-online OpenAI Stanford Summ. SHP PMP RM 62.68 74.78 61.62 69.71 76.52 73.98 65.43 64.57 69.60 69.99 60.05 60.10 70.59 70.52
QWEN model to extract the reward for a sentence based on a specific end token. The learning rate for this process has been set to a constant value of 3 × 10⁻⁶, and the batch size is 64. Additionally, the sequence length is set to 2048, and the training process lasts for a single epoch. | 2309.16609#36 | Qwen Technical Report | Large language models (LLMs) have revolutionized the field of artificial
intelligence, enabling natural language processing tasks that were previously
thought to be exclusive to humans. In this work, we introduce Qwen, the first
installment of our large language model series. Qwen is a comprehensive
language model series that encompasses distinct models with varying parameter
counts. It includes Qwen, the base pretrained language models, and Qwen-Chat,
the chat models finetuned with human alignment techniques. The base language
models consistently demonstrate superior performance across a multitude of
downstream tasks, and the chat models, particularly those trained using
Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The
chat models possess advanced tool-use and planning capabilities for creating
agent applications, showcasing impressive performance even when compared to
bigger models on complex tasks like utilizing a code interpreter. Furthermore,
we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as
well as mathematics-focused models, Math-Qwen-Chat, which are built upon base
language models. These models demonstrate significantly improved performance in
comparison with open-source models, and slightly fall behind the proprietary
models. | http://arxiv.org/pdf/2309.16609 | Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu | cs.CL | 59 pages, 5 figures | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.20050"
},
{
"id": "2108.07258"
},
{
"id": "2306.09212"
},
{
"id": "2203.15556"
},
{
"id": "2304.12244"
},
{
"id": "2205.01068"
},
{
"id": "1911.02116"
},
{
"id": "2306.03901"
},
{
"id": "2204.06745"
},
{
"id": "2309.05653"
},
{
"id": "2111.10952"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2305.14314"
},
{
"id": "2305.06500"
},
{
"id": "2306.15595"
},
{
"id": "2305.18290"
},
{
"id": "2302.04761"
},
{
"id": "2103.03874"
},
{
"id": "2305.10403"
},
{
"id": "1910.03771"
},
{
"id": "2211.05100"
},
{
"id": "2009.03300"
},
{
"id": "2307.13528"
},
{
"id": "1710.05941"
},
{
"id": "2108.07732"
},
{
"id": "2210.17323"
},
{
"id": "2304.02015"
},
{
"id": "2305.14688"
},
{
"id": "2306.07906"
},
{
"id": "2110.14168"
},
{
"id": "2306.14824"
},
{
"id": "2303.17580"
},
{
"id": "2308.12950"
},
{
"id": "2210.02414"
},
{
"id": "2308.10848"
},
{
"id": "2301.03988"
},
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2308.09583"
},
{
"id": "2112.09332"
},
{
"id": "2308.00352"
},
{
"id": "2309.00986"
},
{
"id": "2304.14178"
},
{
"id": "2110.08207"
},
{
"id": "1909.08053"
},
{
"id": "2305.08322"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2305.16291"
},
{
"id": "2204.05862"
},
{
"id": "2210.03629"
},
{
"id": "2303.14742"
},
{
"id": "2306.17492"
},
{
"id": "2004.05150"
},
{
"id": "1907.11692"
},
{
"id": "2106.09685"
},
{
"id": "2304.01196"
},
{
"id": "1707.06347"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.03047"
},
{
"id": "2304.10453"
},
{
"id": "2211.01786"
},
{
"id": "2306.04751"
},
{
"id": "2303.03378"
},
{
"id": "2303.17760"
},
{
"id": "2212.08073"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2112.11446"
},
{
"id": "2109.00859"
},
{
"id": "2309.00071"
},
{
"id": "2103.10360"
},
{
"id": "2210.09261"
},
{
"id": "1711.05101"
},
{
"id": "2112.00861"
},
{
"id": "2305.10250"
},
{
"id": "2006.16668"
},
{
"id": "2104.09864"
},
{
"id": "2002.05202"
},
{
"id": "2309.04658"
}
] |
2309.16797 | 36 |
First-order Hyper-Mutation: We concatenate the hyper-mutation-prompt "Please summarize and improve the following instruction:" to a mutation-prompt so that the LLM generates a new mutation-prompt. This newly generated mutation-prompt is then applied to the task-prompt of that unit (see First-Order Prompt Generation in Section 3.2.1). In this way, we can evaluate the influence of the hyper-mutation via its newly generated mutation-prompt on the quality of the evolved downstream task-prompt at once.
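The concatenate-then-apply step this chunk describes can be sketched as follows. The `llm` function is a placeholder standing in for a real model call, and the example prompts are illustrative assumptions, not taken from the paper:

```python
# Sketch of first-order hyper-mutation: the fixed hyper-mutation-prompt is
# concatenated to a unit's mutation-prompt, the LLM yields a new
# mutation-prompt, and that new mutation-prompt is immediately applied to
# the unit's task-prompt so its downstream effect can be evaluated at once.
HYPER_MUTATION_PROMPT = "Please summarize and improve the following instruction:"

def llm(prompt: str) -> str:
    # Placeholder standing in for an actual LLM completion call.
    return "Rewrite the instruction so it is clearer and more specific."

def first_order_hyper_mutation(mutation_prompt: str, task_prompt: str) -> tuple[str, str]:
    # Step 1 (hyper-mutation): generate a new mutation-prompt.
    new_mutation_prompt = llm(f"{HYPER_MUTATION_PROMPT} {mutation_prompt}")
    # Step 2 (first-order prompt generation): apply it to the task-prompt.
    new_task_prompt = llm(f"{new_mutation_prompt} {task_prompt}")
    return new_mutation_prompt, new_task_prompt

new_mp, new_tp = first_order_hyper_mutation(
    "Make the instruction more creative.",
    "Solve the math word problem, showing your working.",
)
print(new_mp)
print(new_tp)
```

With a real LLM in place of the stub, the quality of `new_tp` on training examples would measure the hyper-mutation's effect.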
3.2.4 LAMARCKIAN MUTATION
For this class of mutation operators we mimic a Lamarckian process. We want to use a successful phenotype (i.e. the concrete working out used to produce correct answers induced by an evolved task-prompt) to generate a new genotype (i.e. a mutant task-prompt). Several processes of this form have appeared in the literature of LLMs, e.g. STaR (Zelikman et al., 2022), APO (Pryzant et al., 2023), and APE (Zhou et al., 2023). | 2309.16797#36 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | http://arxiv.org/pdf/2309.16797 | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | cs.CL, cs.AI, cs.LG, cs.NE | null | null | cs.CL | 20230928 | 20230928 | [
{
"id": "2305.03495"
},
{
"id": "2205.10625"
},
{
"id": "2303.11381"
},
{
"id": "2203.11171"
},
{
"id": "2210.03629"
},
{
"id": "1608.01413"
}
] |
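The self-referential loop summarized in the Promptbreeder row above (mutate task-prompts via mutation-prompts, evaluate fitness on a training set, keep the winners) can be sketched as below. The `llm` and `fitness` stubs and the tournament-style selection are illustrative assumptions, not the paper's actual implementation:

```python
# Minimal sketch of an evolutionary loop over (task-prompt, mutation-prompt)
# units, under the stated assumptions.
import random

def llm(prompt: str) -> str:
    # Stub for an LLM completion call; deterministic placeholder "mutation".
    return prompt[::-1][:60]

def fitness(task_prompt: str, train_set) -> float:
    # Stub: in practice this would score the task-prompt on training examples.
    return float(sum(len(task_prompt) % (len(x) + 1) for x in train_set))

def evolve(population, train_set, generations=3, seed=0):
    rng = random.Random(seed)
    for _ in range(generations):
        # Tournament-style step (an assumption): pick two units, mutate the
        # fitter one's task-prompt with its mutation-prompt, overwrite the loser.
        a, b = rng.sample(range(len(population)), 2)
        if fitness(population[a][0], train_set) < fitness(population[b][0], train_set):
            a, b = b, a
        task, mut = population[a]
        population[b] = (llm(f"{mut} {task}"), mut)
    return population

pop = [("Solve the problem step by step.", "Improve this instruction:"),
       ("Answer the question.", "Make this instruction clearer:")]
result = evolve(pop, ["example one", "example two"])
```

Making the mutation-prompts themselves subject to mutation (e.g. via the hyper-mutation described in the chunk) is what makes the process self-referential.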