Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models

Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi

arXiv:2309.01219 (http://arxiv.org/pdf/2309.01219). Primary category: cs.CL; also cs.AI, cs.CY, cs.LG. Work in progress; 32 pages. Published 2023-09-03; updated 2023-09-24.

Abstract: While large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks, a significant concern revolves around their propensity to exhibit hallucinations: LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge. This phenomenon poses a substantial challenge to the reliability of LLMs in real-world scenarios. In this paper, we survey recent efforts on the detection, explanation, and mitigation of hallucination, with an emphasis on the unique challenges posed by LLMs. We present taxonomies of the LLM hallucination phenomena and evaluation benchmarks, analyze existing approaches aiming at mitigating LLM hallucination, and discuss potential directions for future research.

Multi-lingual hallucination. Existing work on LLM hallucination primarily focuses on English, despite the thousands of languages spoken in the world. Ideally, LLMs should handle diverse languages uniformly well. Some previous studies have investigated the performance of LLMs on multi-lingual benchmarks (Ahuja et al., 2023; Lai et al., 2023) and collectively found that their performance degrades when generalizing to non-Latin languages. In terms of the hallucination problem, Guerreiro et al. (2023a) observe that multi-lingual LLMs predominantly struggle with hallucinations in low-resource languages in the translation task. Potential follow-up work could include systematically measuring and analyzing LLM hallucinations across a wide variety of languages. As shown in Table 11, we find that LLMs such as ChatGPT provide accurate answers in English but expose hallucinations in other languages, leading to multilingual inconsistencies. The transfer of knowledge within LLMs from high-resource languages to low-resource ones also presents an interesting and promising research direction.
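The multilingual-inconsistency check described above can be sketched as a simple agreement measure over a model's answers to the same factual question in several languages. The `answers` dictionary and the `consistency_rate` helper below are illustrative assumptions, not part of the survey; a real evaluation would obtain each answer from an LLM API and translate them into a pivot language before comparing.

```python
# Sketch: flagging multilingual inconsistency in factual answers.
# `answers` maps language code -> the model's answer to the *same* factual
# question; in practice these would come from an LLM API call (not shown),
# and answers would be translated to one pivot language before comparison.
from collections import Counter

def normalize(ans: str) -> str:
    """Crude normalization: lowercase, keep only alphanumerics and spaces."""
    return "".join(ch for ch in ans.lower() if ch.isalnum() or ch.isspace()).strip()

def consistency_rate(answers: dict) -> float:
    """Fraction of languages agreeing with the majority (normalized) answer."""
    norm = [normalize(a) for a in answers.values()]
    _, count = Counter(norm).most_common(1)[0]
    return count / len(norm)

# Toy example: two languages agree, the third diverges, which is the kind
# of cross-lingual inconsistency the survey describes for ChatGPT.
answers = {
    "en": "Mount Everest",
    "de": "Mount Everest.",
    "sw": "Kilimanjaro",
}
print(consistency_rate(answers))  # 2 of 3 languages agree
```

A low consistency rate does not say *which* language hallucinated, only that the model's factual behavior is not language-invariant; pinpointing the erroneous language still requires a gold answer.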
Multi-modal hallucination. In an effort to improve the performance of complex multi-modal tasks, recent studies have proposed replacing the text encoder of existing vision-language models with LLMs, resulting in large vision-language models (LVLMs) (Liu et al., 2023b; Ye et al., 2023). Despite their success, some research reveals that LVLMs inherit the hallucination problem from LLMs and exhibit even more severe multi-modal hallucinations than smaller models. For instance, Li et al. (2023e) discuss the object hallucination of LVLMs, wherein LVLMs generate content containing objects that are inconsistent with or absent from the input image, such as the example in Figure 7. To effectively measure object hallucinations generated by LVLMs, Liu et al. (2023a) propose the GPT4-Assisted Visual Instruction Evaluation (GAVIE) benchmark. Gunjal et al. (2023) introduce a multi-modal hallucination detection dataset named M-HalDetect and further study unfaithful descriptions and inaccurate relationships beyond object hallucinations in LVLMs. Furthermore, in addition to images, some studies have extended LLMs to other modalities such as audio (Wu et al., 2023a; Su et al., 2023) and video (Maaz et al., 2023), making it interesting to investigate hallucination in these new scenarios.

Figure 7: An example of object hallucination in LVLMs. We highlight the hallucination in red, as there is no person under the tree in this picture.
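Object hallucination of the kind in Figure 7 is commonly scored as the fraction of mentioned objects that are absent from the image. The sketch below is a simplified, CHAIR-like ratio under stated assumptions: the toy `VOCAB` and keyword matcher stand in for the object parsers and detectors that real evaluations use.

```python
# Sketch: scoring object hallucination in a generated image description.
# `gt_objects` lists the objects actually in the image; any mentioned
# object outside that list counts as hallucinated. This is a simplified
# ratio; real evaluations extract objects with parsers/detectors, not a
# toy keyword vocabulary.

VOCAB = {"tree", "person", "dog", "bench", "sky"}  # toy object vocabulary

def mentioned_objects(caption: str) -> set:
    """Objects from VOCAB that the caption mentions (crude keyword match)."""
    words = {w.strip(".,?!").lower() for w in caption.split()}
    return words & VOCAB

def hallucination_rate(caption: str, gt_objects: set) -> float:
    """Fraction of mentioned objects that are not actually in the image."""
    mentioned = mentioned_objects(caption)
    if not mentioned:
        return 0.0
    return len(mentioned - gt_objects) / len(mentioned)

# Figure 7's example: the image contains a tree but no person.
caption = "Yes, there is a person under the tree."
print(hallucination_rate(caption, {"tree"}))  # 0.5: "person" is hallucinated
```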
Model editing. As elaborated in § 4, hallucinations in LLMs may primarily stem from the memorization of false information or the absence of correct factual knowledge. To mitigate these issues in LLMs with minimal computational overhead, the concept of model editing has been introduced (Sinitsin et al., 2020; De Cao et al., 2021). This approach modifies the behavior of models in a manner that is both data- and computation-efficient. At present, there are two mainstream paradigms for model editing. The first incorporates an auxiliary sub-network (Mitchell et al., 2022; Huang et al., 2023b), while the second directly modifies the original model parameters (Meng et al., 2022a,b). This technique may be instrumental in eliminating LLMs' hallucinations by deliberately editing their stored factual knowledge (Lanham et al., 2023; Onoe et al., 2023). However, this emerging field still faces numerous challenges, including editing black-box LLMs (Murty et al., 2022), in-context model editing (Zheng et al., 2023a), and multi-hop model editing (Zhong et al., 2023).
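The direct parameter-modification paradigm can be illustrated with a rank-one update to a linear layer: change the weights so that one chosen key vector maps to a new value while orthogonal inputs are untouched. This NumPy toy is only a sketch of the core linear-algebra idea; the actual methods of Meng et al. additionally use causal tracing to locate the layer and a key-covariance-weighted solve.

```python
import numpy as np

# Toy rank-one "model edit": alter a linear layer W so that a specific key
# vector k now maps to a desired value v_star, while inputs orthogonal to
# k keep their original outputs. Illustrates the direct parameter-editing
# paradigm; real methods (e.g. ROME-style) are considerably more involved.

def rank_one_edit(W: np.ndarray, k: np.ndarray, v_star: np.ndarray) -> np.ndarray:
    """Return W' with W' @ k == v_star and W' @ x == W @ x for x orthogonal to k."""
    residual = v_star - W @ k                    # correction needed at k
    return W + np.outer(residual, k) / (k @ k)   # rank-one update

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
k = np.array([1.0, 0.0, 0.0])              # key encoding the fact to edit
v_star = np.array([1.0, 2.0, 3.0, 4.0])    # desired new association
W_edited = rank_one_edit(W, k, v_star)

assert np.allclose(W_edited @ k, v_star)   # edited fact is now recalled
x_orth = np.array([0.0, 1.0, 0.0])         # unrelated (orthogonal) input
assert np.allclose(W_edited @ x_orth, W @ x_orth)  # other behavior preserved
```

The second assertion is the "locality" property that distinguishes model editing from ordinary fine-tuning: the update is constructed so unrelated knowledge is provably unaffected, at least for inputs orthogonal to the edited key.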
Attack/defense for inducing hallucination. As previously discussed, significant efforts have been undertaken by both researchers and companies to guarantee that LLMs produce truthful responses, ultimately improving the overall user experience. Cutting-edge commercial LLMs, such as GPT-4 (OpenAI, 2023b), appear to have acquired a decent ability to generate proper responses to factuality-related queries. However, they are not invincible. Several studies show that LLMs can be manipulated with techniques such as meticulously crafted jailbreak prompts to elicit arbitrary desired responses (Wei et al., 2023a; Zou et al., 2023), including hallucinations. Consequently, attacking and defending strategies for inducing hallucinations could also be a promising research direction. This is particularly important because the generation of fabricated information could breach relevant laws, leading to the forced shutdown of LLM applications. This direction is also intimately tied to the robustness of existing hallucination mitigation methods.
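A minimal input-side defense against such manipulation can be sketched as a prompt filter that flags inputs resembling known jailbreak patterns. The pattern list and heuristic below are invented for illustration; deployed systems rely on trained safety classifiers or perplexity-based filters rather than keyword matching, which is trivially evaded.

```python
# Sketch: a naive input-side defense that flags prompts resembling known
# jailbreak patterns before they reach the model. Purely illustrative:
# the pattern list is hypothetical, and keyword matching is easy to evade;
# real deployments use trained classifiers.

SUSPICIOUS_PATTERNS = [
    "ignore previous instructions",
    "ignore all prior instructions",
    "pretend you have no restrictions",
]

def is_suspicious(prompt: str) -> bool:
    """True if the prompt contains a known jailbreak-style phrase."""
    normalized = " ".join(prompt.lower().split())  # fold case and whitespace
    return any(pattern in normalized for pattern in SUSPICIOUS_PATTERNS)

print(is_suspicious("Ignore previous instructions and invent a citation."))  # True
print(is_suspicious("Who wrote Hamlet?"))                                    # False
```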
Others. Given that current research on hallucinations in LLMs is still in its early stages, there are many other intriguing and promising avenues for further investigation. For instance, researchers have begun to treat LLMs as agents for open-world planning in the pursuit of AGI (Park et al., 2023; Wang et al., 2023a). Addressing the hallucination problem within the context of LLMs-as-agents presents brand-new challenges and holds considerable practical value. Besides, analyzing and tracing LLM hallucinations from the linguistic aspect is another interesting research topic. Rawte et al. (2023) show that the occurrence of LLM hallucination is closely related to linguistic nuances of the user prompts, such as readability, formality, and concreteness. We believe all these directions merit thorough exploration in future research.
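Linguistic analyses of the kind Rawte et al. (2023) perform start from surface features of the prompt. The sketch below computes two crude readability proxies (average sentence and word length); these stand-in statistics are my assumption for illustration, whereas validated metrics such as Flesch scores and formality classifiers would be used in a real study.

```python
# Sketch: crude linguistic features of a user prompt, as inputs to an
# analysis relating prompt style (readability, etc.) to hallucination
# risk. These statistics are simple proxies, not validated metrics.
import re

def prompt_features(prompt: str) -> dict:
    """Sentence count plus average sentence length and word length."""
    sentences = [s for s in re.split(r"[.!?]+", prompt) if s.strip()]
    words = re.findall(r"[A-Za-z']+", prompt)
    return {
        "n_sentences": len(sentences),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "avg_word_len": sum(map(len, words)) / max(len(words), 1),
    }

feats = prompt_features("Explain quantum entanglement. Use simple terms.")
print(feats["n_sentences"], feats["avg_sentence_len"])  # 2 sentences, 3 words each
```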
# 7 Conclusion

With their strong understanding and generation capabilities in the open domain, LLMs have garnered significant attention from both academic and industrial communities. However, hallucination remains a critical challenge that impedes the practical application of LLMs. In this survey, we offer a comprehensive review of the most recent advances, primarily following the release of ChatGPT, that aim to evaluate, trace, and eliminate hallucinations within LLMs. We also delve into the existing challenges and discuss potential future directions. We aspire for this survey to serve as a valuable resource for researchers intrigued by the mystery of LLM hallucinations, thereby fostering the practical application of LLMs.

# Acknowledgments

We would like to thank Yu Wu and Yang Liu for their valuable suggestions.

# References

Ayush Agrawal, Lester Mackey, and Adam Tauman Kalai. 2023. Do language models know when they're hallucinating references? arXiv preprint arXiv:2305.18248.

Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, and Siva Reddy. 2023. Evaluating correctness and faithfulness of instruction-following models for question answering. arXiv preprint arXiv:2307.16877.
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
While large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks, a significant concern revolves around their propensity to exhibit hallucinations: LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge. This phenomenon poses a substantial challenge to the reliability of LLMs in real-world scenarios. In this paper, we survey recent efforts on the detection, explanation, and mitigation of hallucination, with an emphasis on the unique challenges posed by LLMs. We present taxonomies of the LLM hallucination phenomena and evaluation benchmarks, analyze existing approaches aiming at mitigating LLM hallucination, and discuss potential directions for future research.
http://arxiv.org/pdf/2309.01219
Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi
cs.CL, cs.AI, cs.CY, cs.LG
work in progress; 32 pages
cs.CL
20230903
20230924
[ { "id": "2307.03109" }, { "id": "2306.05424" }, { "id": "2305.20050" }, { "id": "2308.06394" }, { "id": "2306.16564" }, { "id": "2307.03917" }, { "id": "2305.11747" }, { "id": "2305.10355" }, { "id": "2308.07317" }, { "id": "2308.10168" }, { "id": "2305.06311" }, { "id": "2307.10169" }, { "id": "2307.15043" }, { "id": "2301.00234" }, { "id": "2305.03268" }, { "id": "2307.15343" }, { "id": "2303.16104" }, { "id": "2309.03118" }, { "id": "2307.11768" }, { "id": "2309.09558" }, { "id": "2305.13300" }, { "id": "2211.05100" }, { "id": "2305.14627" }, { "id": "2305.19187" }, { "id": "2004.00345" }, { "id": "2307.13528" }, { "id": "2210.09150" }, { "id": "2307.04964" }, { "id": "2203.05115" }, { "id": "2309.05936" }, { "id": "2305.11738" }, { "id": "2306.09296" }, { "id": "2309.02654" }, { "id": "2305.14795" }, { "id": "2305.14325" }, { "id": "2203.03802" }, { "id": "2305.14623" }, { "id": "2309.05463" }, { "id": "2308.10792" }, { "id": "2307.10236" }, { "id": "2302.13971" }, { "id": "2308.11764" }, { "id": "2309.11064" }, { "id": "2305.13281" }, { "id": "2306.03341" }, { "id": "2112.09332" }, { "id": "2307.01379" }, { "id": "2309.08594" }, { "id": "2304.05613" }, { "id": "2303.15621" }, { "id": "2301.12652" }, { "id": "2307.06908" }, { "id": "2307.02483" }, { "id": "2304.14178" }, { "id": "2305.13534" }, { "id": "2303.12528" }, { "id": "2306.13063" }, { "id": "2305.18248" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "2005.03642" }, { "id": "2306.05212" }, { "id": "2305.13269" }, { "id": "2305.14908" }, { "id": "2307.11019" }, { "id": "2307.00360" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2305.14002" }, { "id": "2303.18223" }, { "id": "2307.03172" }, { "id": "2307.03987" }, { "id": "2301.09785" }, { "id": "2302.04023" }, { "id": "2210.07229" }, { "id": "2307.05300" }, { "id": "2306.04528" }, { "id": "2305.01651" }, { "id": "1907.11692" }, { "id": "2304.03277" }, { "id": "2305.13669" }, { "id": "2307.06290" }, { 
"id": "2304.01196" }, { "id": "2109.07958" }, { "id": "2309.03883" }, { "id": "2302.07842" }, { "id": "2307.01850" }, { "id": "2305.14251" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2304.07327" }, { "id": "2308.04215" }, { "id": "2306.14565" }, { "id": "2307.15337" }, { "id": "2308.03958" }, { "id": "2306.04751" }, { "id": "2302.00083" }, { "id": "2305.16355" }, { "id": "2305.14552" }, { "id": "2305.13068" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2202.01110" }, { "id": "2307.13702" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2305.12740" }, { "id": "2309.11495" }, { "id": "2305.15852" }, { "id": "2303.08896" }, { "id": "2305.00955" }, { "id": "2304.10513" }, { "id": "2201.05273" }, { "id": "2307.08701" }, { "id": "2205.11482" }, { "id": "2305.04757" }, { "id": "2304.13734" }, { "id": "2304.03442" }, { "id": "2212.09251" }, { "id": "2305.14739" }, { "id": "2305.18153" }, { "id": "2211.14876" }, { "id": "2303.11315" }, { "id": "2305.11206" }, { "id": "2307.16877" }, { "id": "2302.12813" } ]
2309.01219
95
Kabir Ahuja, Rishav Hada, Millicent Ochieng, Prachi Jain, Harshita Diddee, Samuel Maina, Tanuja Ganu, Sameer Segal, Maxamed Axmed, Kalika Bali, et al. 2023. MEGA: Multilingual evaluation of generative AI. arXiv preprint arXiv:2303.12528.

Ekin Akyürek, Tolga Bolukbasi, Frederick Liu, Binbin Xiong, Ian Tenney, Jacob Andreas, and Kelvin Guu. 2022. Tracing knowledge in language models back to the training data. arXiv preprint arXiv:2205.11482.

Josue Casco-Rodriguez, Lorenzo Luzi, Ahmed Imtiaz Humayun, Hossein Babaei, Daniel LeJeune, Ali Siahkoohi, and Richard G. Baraniuk. 2023. Self-consuming generative models go MAD. arXiv preprint arXiv:2307.01850.

Amos Azaria and Tom Mitchell. 2023. The internal state of an LLM knows when it's lying. arXiv preprint arXiv:2304.13734.
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862.

Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, et al. 2023. A multitask, multilingual, multimodal evaluation of ChatGPT on reasoning, hallucination, and interactivity. arXiv preprint arXiv:2302.04023.

Steffen Bickel, Peter Haider, and Tobias Scheffer. 2005. Predicting sentences using n-gram language models. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 193–200.
Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. 2022. Improving language models by retrieving from trillions of tokens. In International Conference on Machine Learning, pages 2206–2240. PMLR.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901.

Deng Cai, Yan Wang, Huayang Li, Wai Lam, and Lemao Liu. 2021. Neural machine translation with monolingual translation memory. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7307–7318.
Meng Cao, Yue Dong, Jiapeng Wu, and Jackie Chi Kit Cheung. 2020. Factual error correction for abstractive summarization models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6251–6258.

Yihan Cao, Yanbin Kang, and Lichao Sun. 2023. Instruction mining: High-quality instruction data selection for large language models. arXiv preprint arXiv:2307.06290.

Kai-Wei Chang, Vinodkumar Prabhakaran, and Vicente Ordonez. 2019. Bias and fairness in natural language processing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): Tutorial Abstracts.

Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Kaijie Zhu, Hao Chen, Linyi Yang, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, et al. 2023. A survey on evaluation of large language models. arXiv preprint arXiv:2307.03109.
Anthony Chen, Panupong Pasupat, Sameer Singh, Hongrae Lee, and Kelvin Guu. 2023a. PURR: Efficiently editing language model hallucinations by denoising language model corruptions. arXiv preprint arXiv:2305.14908.

Hongshen Chen, Xiaorui Liu, Dawei Yin, and Jiliang Tang. 2017. A survey on dialogue systems: Recent advances and new frontiers. ACM SIGKDD Explorations Newsletter, 19(2):25–35.

Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, et al. 2023b. AlpaGasus: Training a better Alpaca with fewer data. arXiv preprint arXiv:2307.08701.

I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, and Pengfei Liu. 2023. FacTool: Factuality detection in generative AI – a tool augmented framework for multi-task and multi-domain scenarios. arXiv preprint arXiv:2307.13528.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.

Yung-Sung Chuang, Yujia Xie, Hongyin Luo, Yoon Kim, James Glass, and Pengcheng He. 2023. DoLa: Decoding by contrasting layers improves factuality in large language models. arXiv preprint arXiv:2309.03883.

Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416.

Roi Cohen, May Hamri, Mor Geva, and Amir Globerson. 2023. LM vs LM: Detecting factual errors via cross examination. arXiv preprint arXiv:2305.13281.
"id": "2304.01196" }, { "id": "2109.07958" }, { "id": "2309.03883" }, { "id": "2302.07842" }, { "id": "2307.01850" }, { "id": "2305.14251" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2304.07327" }, { "id": "2308.04215" }, { "id": "2306.14565" }, { "id": "2307.15337" }, { "id": "2308.03958" }, { "id": "2306.04751" }, { "id": "2302.00083" }, { "id": "2305.16355" }, { "id": "2305.14552" }, { "id": "2305.13068" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2202.01110" }, { "id": "2307.13702" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2305.12740" }, { "id": "2309.11495" }, { "id": "2305.15852" }, { "id": "2303.08896" }, { "id": "2305.00955" }, { "id": "2304.10513" }, { "id": "2201.05273" }, { "id": "2307.08701" }, { "id": "2205.11482" }, { "id": "2305.04757" }, { "id": "2304.13734" }, { "id": "2304.03442" }, { "id": "2212.09251" }, { "id": "2305.14739" }, { "id": "2305.18153" }, { "id": "2211.14876" }, { "id": "2303.11315" }, { "id": "2305.11206" }, { "id": "2307.16877" }, { "id": "2302.12813" } ]
2309.01219
101
Mike Conover, Matt Hayes, Ankit Mathur, Jianwei Xie, Jun Wan, Sam Shah, Ali Ghodsi, Patrick Wendell, Matei Zaharia, and Reynold Xin. 2023. Free Dolly: Introducing the world's first truly open instruction-tuned LLM.

Leyang Cui, Yu Wu, Shujie Liu, and Yue Zhang. 2021. Knowledge enhanced fine-tuning for better handling unseen entities in dialogue generation. In EMNLP.

David Dale, Elena Voita, Loïc Barrault, and Marta R. Costa-jussà. 2023. Detecting and mitigating hallucinations in machine translation: Model internal workings alone do well, sentence similarity even better. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 36–50. Association for Computational Linguistics.

Nicola De Cao, Wilker Aziz, and Ivan Titov. 2021. Editing factual knowledge in language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6491–6506.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186.

Shehzaad Dhuliawala, Mojtaba Komeili, Jing Xu, Roberta Raileanu, Xian Li, Asli Celikyilmaz, and Jason Weston. 2023. Chain-of-verification reduces hallucination in large language models. arXiv preprint arXiv:2309.11495.

Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, et al. 2023. A survey for in-context learning. arXiv preprint arXiv:2301.00234.

Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, and Dongyeop Kang. 2022. Understanding iterative revision from human-written text. arXiv preprint arXiv:2203.03802.
Yilun Du, Shuang Li, Antonio Torralba, Joshua B. Tenenbaum, and Igor Mordatch. 2023. Improving factuality and reasoning in language models through multiagent debate. arXiv preprint arXiv:2305.14325.

Jinhao Duan, Hao Cheng, Shiqi Wang, Chenan Wang, Alex Zavalny, Renjing Xu, Bhavya Kailkhura, and Kaidi Xu. 2023. Shifting attention to relevance: Towards the uncertainty estimation of large language models. arXiv preprint arXiv:2307.01379.

Esin Durmus, He He, and Mona T. Diab. 2020. FEQA: A question answering evaluation framework for faithfulness assessment in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 5055–5070. Association for Computational Linguistics.

Nouha Dziri, Sivan Milton, Mo Yu, Osmar Zaiane, and Siva Reddy. 2022. On the origin of hallucinations in conversational models: Is it the datasets or the models? In Proceedings of the
2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5271–5285.

Nouha Dziri, Hannah Rashkin, Tal Linzen, and David Reitter. 2021. Evaluating groundedness in dialogue systems: The BEGIN benchmark. CoRR, abs/2105.00071.

Alex Fabbri, Prafulla Kumar Choubey, Jesse Vig, Chien-Sheng Wu, and Caiming Xiong. 2022. Improving factual consistency in summarization with compression-based post-editing. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9149–9156.

Chao Feng, Xinyu Zhang, and Zichu Fei. 2023. Knowledge Solver: Teaching LLMs to search for domain knowledge from knowledge graphs. arXiv preprint arXiv:2309.03118.

Patrick Fernandes, Aman Madaan, Emmy Liu, António Farinhas, Pedro Henrique Martins, Amanda Bertsch, José G. C. de Souza, Shuyan Zhou, Tongshuang Wu, Graham Neubig, et al. 2023. Bridging the gap: A survey on integrating (human) feedback for natural language generation. arXiv preprint arXiv:2305.00955.
Leo Gao, John Schulman, and Jacob Hilton. 2022. Scaling laws for reward model overoptimization.

Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, et al. 2023a. RARR: Researching and revising what language models say, using language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 16477–16508.

Tianyu Gao, Howard Yen, Jiatong Yu, and Danqi Chen. 2023b. Enabling large language models to generate text with citations. arXiv preprint arXiv:2305.14627.

Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. Creating training corpora for NLG micro-planners. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 179–188.
Ismael Garrido-Muñoz, Arturo Montejo-Ráez, Fernando Martínez-Santiago, and L. Alfonso Ureña-López. 2021. A survey on bias in deep NLP. Applied Sciences, 11(7):3184.

Yoav Goldberg. 2023. Reinforcement learning for language models. GitHub Blog.

Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen, Yujiu Yang, Nan Duan, and Weizhu Chen. 2023. CRITIC: Large language models can self-correct with tool-interactive critiquing. arXiv preprint arXiv:2305.11738.

Nuno M. Guerreiro, Duarte Alves, Jonas Waldendorf, Barry Haddow, Alexandra Birch, Pierre Colombo, and André F. T. Martins. 2023a. Hallucinations in large multilingual translation models. arXiv preprint arXiv:2303.16104.
Nuno Miguel Guerreiro, Elena Voita, and André F. T. Martins. 2023b. Looking for a needle in a haystack: A comprehensive study of hallucinations in neural machine translation. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2023, Dubrovnik, Croatia, May 2-6, 2023, pages 1059–1075. Association for Computational Linguistics.

Jihan Yin, and Erhan Bas. 2023. Detecting and preventing hallucinations in large vision language models. arXiv preprint arXiv:2308.06394.

Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. 2017. On calibration of modern neural networks. In International Conference on Machine Learning, pages 1321–1330. PMLR.

Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. In International Conference on Learning Representations.

Jiayang Song, Zhijie Wang, Huaming Chen, and Lei Ma. 2023a. Look before you leap: An exploratory study of uncertainty measurement for large language models. arXiv preprint arXiv:2307.10236.
2309.01219#107
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
While large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks, a significant concern revolves around their propensity to exhibit hallucinations: LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge. This phenomenon poses a substantial challenge to the reliability of LLMs in real-world scenarios. In this paper, we survey recent efforts on the detection, explanation, and mitigation of hallucination, with an emphasis on the unique challenges posed by LLMs. We present taxonomies of the LLM hallucination phenomena and evaluation benchmarks, analyze existing approaches aiming at mitigating LLM hallucination, and discuss potential directions for future research.
http://arxiv.org/pdf/2309.01219
Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi
cs.CL, cs.AI, cs.CY, cs.LG
work in progress; 32 pages
null
cs.CL
20230903
20230924
2309.01219
108
Zeyu Huang, Yikang Shen, Xiaofeng Zhang, Jie Zhou, Wenge Rong, and Zhang Xiong. 2023b. Transformer-patcher: One mistake worth one neuron. arXiv preprint arXiv:2301.09785.

Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1–38.

Zhengbao Jiang, Jun Araki, Haibo Ding, and Graham Neubig. 2021. How can we know when language models know? On the calibration of language models for question answering. Transactions of the Association for Computational Linguistics, 9:962–977.

Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield Dodds, Nova DasSarma, Eli Tran-Johnson, et al. 2022. Language models (mostly) know what they know. arXiv preprint arXiv:2207.05221.
2309.01219#108
2309.01219
109
Jean Kaddour, Joshua Harris, Maximilian Mozes, Herbie Bradley, Roberta Raileanu, and Robert McHardy. 2023. Challenges and applications of large language models. arXiv preprint arXiv:2307.10169.

Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi-Rui Tam, Keith Stevens, Abdullah Barhoum, Nguyen Minh Duc, Oliver Stanley, Richárd Nagyfi, et al. 2023. OpenAssistant conversations – democratizing large language model alignment. arXiv preprint arXiv:2304.07327.

Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 9332–9346. Association for Computational Linguistics.
2309.01219#109
2309.01219
110
Viet Dac Lai, Nghia Trung Ngo, Amir Pouran Ben Veyseh, Hieu Man, Franck Dernoncourt, Trung Bui, and Thien Huu Nguyen. 2023. ChatGPT beyond English: Towards a comprehensive evaluation of large language models in multilingual learning. arXiv preprint arXiv:2304.05613.

Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A lite BERT for self-supervised learning of language representations. In International Conference on Learning Representations.

Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, et al. 2023. Measuring faithfulness in chain-of-thought reasoning. arXiv preprint arXiv:2307.13702.
2309.01219#110
2309.01219
111
Angeliki Lazaridou, Elena Gribovskaya, Wojciech Stokowiec, and Nikolai Grigorev. 2022. Internet-augmented language models through few-shot prompting for open-domain question answering. arXiv preprint arXiv:2203.05115.

Ariel N Lee, Cole J Hunter, and Nataniel Ruiz. 2023. Platypus: Quick, cheap, and powerful refinement of LLMs. arXiv preprint arXiv:2308.07317.

Katherine Lee, Orhan Firat, Ashish Agarwal, Clara Fannjiang, and David Sussillo. 2019. Hallucinations in neural machine translation.

Nayeon Lee, Wei Ping, Peng Xu, Mostofa Patwary, Pascale N Fung, Mohammad Shoeybi, and Bryan Catanzaro. 2022. Factuality enhanced language models for open-ended text generation. Advances in Neural Information Processing Systems, 35:34586–34599.
2309.01219#111
2309.01219
112
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880.

Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020b. Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33:9459–9474.

Huayang Li, Yixuan Su, Deng Cai, Yan Wang, and Lemao Liu. 2022a. A survey on retrieval-augmented text generation. arXiv preprint arXiv:2202.01110.
2309.01219#112
2309.01219
113
Junyi Li, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun Nie, and Ji-Rong Wen. 2023a. HaluEval: A large-scale hallucination evaluation benchmark for large language models. arXiv preprint arXiv:2305.11747.

Junyi Li, Tianyi Tang, Wayne Xin Zhao, Jian-Yun Nie, and Ji-Rong Wen. 2022b. Pretrained language models for text generation: A survey. arXiv preprint arXiv:2201.05273.

Kenneth Li, Oam Patel, Fernanda Viégas, Hanspeter Pfister, and Martin Wattenberg. 2023b. Inference-time intervention: Eliciting truthful answers from a language model. arXiv preprint arXiv:2306.03341.

Miaoran Li, Baolin Peng, and Zhu Zhang. 2023c. Self-checker: Plug-and-play modules for fact-checking with large language models. arXiv preprint arXiv:2305.14623.
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
While large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks, a significant concern revolves around their propensity to exhibit hallucinations: LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge. This phenomenon poses a substantial challenge to the reliability of LLMs in real-world scenarios. In this paper, we survey recent efforts on the detection, explanation, and mitigation of hallucination, with an emphasis on the unique challenges posed by LLMs. We present taxonomies of the LLM hallucination phenomena and evaluation benchmarks, analyze existing approaches aiming at mitigating LLM hallucination, and discuss potential directions for future research.
http://arxiv.org/pdf/2309.01219
Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi
cs.CL, cs.AI, cs.CY, cs.LG
work in progress; 32 pages
Published: 2023-09-03. Last updated: 2023-09-24.
Shaobo Li, Xiaoguang Li, Lifeng Shang, Zhenhua Dong, Chengjie Sun, Bingquan Liu, Zhenzhou Ji, Xin Jiang, and Qun Liu. 2022c. How pre-trained language models capture factual knowledge? A causal-inspired analysis. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1720–1732.

Xingxuan Li, Ruochen Zhao, Yew Ken Chia, Bosheng Ding, Lidong Bing, Shafiq Joty, and Soujanya Poria. 2023d. Chain of knowledge: A framework for grounding large language models with structured knowledge bases. arXiv preprint arXiv:2305.13269.

Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. 2023e. Evaluating object hallucination in large vision-language models. arXiv preprint arXiv:2305.10355.

Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, and Yin Tat Lee. 2023f. Textbooks are all you need ii: phi-1.5 technical report. arXiv preprint arXiv:2309.05463.
Zuchao Li, Shitou Zhang, Hai Zhao, Yifei Yang, and Dongjie Yang. 2023g. Batgpt: A bidirectional autoregressive talker from generative pre-trained transformer. arXiv preprint arXiv:2307.00360.

Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2023. Let's verify step by step. arXiv preprint arXiv:2305.20050.

Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81.

Stephanie Lin, Jacob Hilton, and Owain Evans. 2021. Truthfulqa: Measuring how models mimic human falsehoods. arXiv preprint arXiv:2109.07958.

Zhen Lin, Shubhendu Trivedi, and Jimeng Sun. 2023. Generating with confidence: Uncertainty quantification for black-box large language models. arXiv preprint arXiv:2305.19187.
Adam Liska, Tomas Kocisky, Elena Gribovskaya, Tayfun Terzi, Eren Sezener, Devang Agrawal, Cyprien de Masson d'Autume, Tim Scholtes, Manzil Zaheer, Susannah Young, et al. 2022. Streamingqa: A benchmark for adaptation to new knowledge over time in question answering models. In International Conference on Machine Learning, pages 13604–13622. PMLR.

Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, and Lijuan Wang. 2023a. Aligning large multi-modal model with robust instruction tuning. arXiv preprint arXiv:2306.14565.

Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2023b. Visual instruction tuning. arXiv preprint arXiv:2304.08485.

Jerry Liu. 2022. LlamaIndex.

Jiongnan Liu, Jiajie Jin, Zihan Wang, Jiehan Cheng, Zhicheng Dou, and Ji-Rong Wen. 2023c. Reta-llm: A retrieval-augmented large language model toolkit. arXiv preprint arXiv:2306.05212.
Nelson F Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2023d. Lost in the middle: How language models use long contexts. arXiv preprint arXiv:2307.03172.

Tianyu Liu, Yizhe Zhang, Chris Brockett, Yi Mao, Zhifang Sui, Weizhu Chen, and Bill Dolan. 2022. A token-level reference-free hallucination detection benchmark for free-form text generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6723–6737.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.

Junyu Luo, Cao Xiao, and Fenglong Ma. 2023a. Zero-resource hallucination prevention for large language models. arXiv preprint arXiv:2309.02654.
Zheheng Luo, Qianqian Xie, and Sophia Ananiadou. 2023b. Chatgpt as a factual inconsistency evaluator for abstractive text summarization. arXiv preprint arXiv:2303.15621.

Ziyang Luo, Can Xu, Pu Zhao, Xiubo Geng, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. 2023c. Augmented large language models with parametric knowledge guiding. arXiv preprint arXiv:2305.04757.

Kelvin Luu, Daniel Khashabi, Suchin Gururangan, Karishma Mandyam, and Noah A Smith. 2022. Time waits for no one! Analysis and challenges of temporal misalignment. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5944–5958.
Muhammad Maaz, Hanoona Rasheed, Salman Khan, and Fahad Shahbaz Khan. 2023. Video-chatgpt: Towards detailed video understanding via large vision and language models. arXiv preprint arXiv:2306.05424.

Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Daniel Khashabi, and Hannaneh Hajishirzi. 2023. When not to trust language models: Investigating effectiveness of parametric and non-parametric memories. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9802–9822.

Potsawee Manakul, Adian Liusie, and Mark JF Gales. 2023. Selfcheckgpt: Zero-resource black-box hallucination detection for generative large language models. arXiv preprint arXiv:2303.08896.
Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan T. McDonald. 2020. On faithfulness and factuality in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 1906–1919. Association for Computational Linguistics.

Nick McKenna, Tianyi Li, Liang Cheng, Mohammad Javad Hosseini, Mark Johnson, and Mark Steedman. 2023. Sources of hallucination by large language models on inference tasks. arXiv preprint arXiv:2305.14552.

Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. 2022a. Locating and editing factual associations in GPT. Advances in Neural Information Processing Systems, 35:17359–17372.

Kevin Meng, Arnab Sen Sharma, Alex Andonian, Yonatan Belinkov, and David Bau. 2022b. Mass-editing memory in a transformer. arXiv preprint arXiv:2210.07229.
2309.01219#120
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
While large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks, a significant concern revolves around their propensity to exhibit hallucinations: LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge. This phenomenon poses a substantial challenge to the reliability of LLMs in real-world scenarios. In this paper, we survey recent efforts on the detection, explanation, and mitigation of hallucination, with an emphasis on the unique challenges posed by LLMs. We present taxonomies of the LLM hallucination phenomena and evaluation benchmarks, analyze existing approaches aiming at mitigating LLM hallucination, and discuss potential directions for future research.
http://arxiv.org/pdf/2309.01219
Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi
cs.CL, cs.AI, cs.CY, cs.LG
work in progress; 32 pages
null
cs.CL
20230903
20230924
[ { "id": "2307.03109" }, { "id": "2306.05424" }, { "id": "2305.20050" }, { "id": "2308.06394" }, { "id": "2306.16564" }, { "id": "2307.03917" }, { "id": "2305.11747" }, { "id": "2305.10355" }, { "id": "2308.07317" }, { "id": "2308.10168" }, { "id": "2305.06311" }, { "id": "2307.10169" }, { "id": "2307.15043" }, { "id": "2301.00234" }, { "id": "2305.03268" }, { "id": "2307.15343" }, { "id": "2303.16104" }, { "id": "2309.03118" }, { "id": "2307.11768" }, { "id": "2309.09558" }, { "id": "2305.13300" }, { "id": "2211.05100" }, { "id": "2305.14627" }, { "id": "2305.19187" }, { "id": "2004.00345" }, { "id": "2307.13528" }, { "id": "2210.09150" }, { "id": "2307.04964" }, { "id": "2203.05115" }, { "id": "2309.05936" }, { "id": "2305.11738" }, { "id": "2306.09296" }, { "id": "2309.02654" }, { "id": "2305.14795" }, { "id": "2305.14325" }, { "id": "2203.03802" }, { "id": "2305.14623" }, { "id": "2309.05463" }, { "id": "2308.10792" }, { "id": "2307.10236" }, { "id": "2302.13971" }, { "id": "2308.11764" }, { "id": "2309.11064" }, { "id": "2305.13281" }, { "id": "2306.03341" }, { "id": "2112.09332" }, { "id": "2307.01379" }, { "id": "2309.08594" }, { "id": "2304.05613" }, { "id": "2303.15621" }, { "id": "2301.12652" }, { "id": "2307.06908" }, { "id": "2307.02483" }, { "id": "2304.14178" }, { "id": "2305.13534" }, { "id": "2303.12528" }, { "id": "2306.13063" }, { "id": "2305.18248" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "2005.03642" }, { "id": "2306.05212" }, { "id": "2305.13269" }, { "id": "2305.14908" }, { "id": "2307.11019" }, { "id": "2307.00360" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2305.14002" }, { "id": "2303.18223" }, { "id": "2307.03172" }, { "id": "2307.03987" }, { "id": "2301.09785" }, { "id": "2302.04023" }, { "id": "2210.07229" }, { "id": "2307.05300" }, { "id": "2306.04528" }, { "id": "2305.01651" }, { "id": "1907.11692" }, { "id": "2304.03277" }, { "id": "2305.13669" }, { "id": "2307.06290" }, { 
"id": "2304.01196" }, { "id": "2109.07958" }, { "id": "2309.03883" }, { "id": "2302.07842" }, { "id": "2307.01850" }, { "id": "2305.14251" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2304.07327" }, { "id": "2308.04215" }, { "id": "2306.14565" }, { "id": "2307.15337" }, { "id": "2308.03958" }, { "id": "2306.04751" }, { "id": "2302.00083" }, { "id": "2305.16355" }, { "id": "2305.14552" }, { "id": "2305.13068" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2202.01110" }, { "id": "2307.13702" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2305.12740" }, { "id": "2309.11495" }, { "id": "2305.15852" }, { "id": "2303.08896" }, { "id": "2305.00955" }, { "id": "2304.10513" }, { "id": "2201.05273" }, { "id": "2307.08701" }, { "id": "2205.11482" }, { "id": "2305.04757" }, { "id": "2304.13734" }, { "id": "2304.03442" }, { "id": "2212.09251" }, { "id": "2305.14739" }, { "id": "2305.18153" }, { "id": "2211.14876" }, { "id": "2303.11315" }, { "id": "2305.11206" }, { "id": "2307.16877" }, { "id": "2302.12813" } ]
Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ram Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, et al. 2023. Augmented language models: a survey. arXiv preprint arXiv:2302.07842.

Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Černocký, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Interspeech. Makuhari.

Bonan Min, Hayley Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heintz, and Dan Roth. 2021. Recent advances in natural language processing via large pre-trained language models: A survey. ACM Computing Surveys.
Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2023. FActScore: Fine-grained atomic evaluation of factual precision in long form text generation. arXiv preprint arXiv:2305.14251.

Eric Mitchell, Charles Lin, Antoine Bosselut, Christopher D Manning, and Chelsea Finn. 2022. Memory-based model editing at scale. In International Conference on Machine Learning, pages 15817–15831. PMLR.

Mohamed Elaraby, Mengyin Lu, Jacob Dunn, Xueying Zhang, Yu Wang, and Shizhu Liu. 2023. Halo: Estimation and reduction of hallucinations in open-source weak large language models. arXiv preprint arXiv:2308.11764.

Dor Muhlgay, Ori Ram, Inbal Magar, Yoav Levine, Nir Ratner, Yonatan Belinkov, Omri Abend, Kevin Leyton-Brown, Amnon Shashua, and Yoav Shoham. 2023. Generating benchmarks for factuality evaluation of language models. arXiv preprint arXiv:2307.06908.
Niels Mündler, Jingxuan He, Slobodan Jenko, and Martin Vechev. 2023. Self-contradictory hallucinations of large language models: Evaluation, detection and mitigation. arXiv preprint arXiv:2305.15852.

Shikhar Murty, Christopher Manning, Scott Lundberg, and Marco Tulio Ribeiro. 2022. Fixing model bugs with natural language patches. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11600–11613.

Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. 2021. WebGPT: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332.

Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. SummaRuNNer: A recurrent neural network based sequence model for extractive summarization of documents. In Proceedings of the AAAI Conference on Artificial Intelligence.
Courtney Napoles, Keisuke Sakaguchi, and Joel Tetreault. 2017. JFLEG: A fluency corpus and benchmark for grammatical error correction. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 229–234.

Roberto Navigli, Simone Conia, and Björn Ross. 2023. Biases in large language models: Origins, inventory and discussion. ACM Journal of Data and Information Quality.

Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gustavo Hernández Ábrego, Ji Ma, Vincent Y. Zhao, Yi Luan, Keith B. Hall, Ming-Wei Chang, and Yinfei Yang. 2022. Large dual encoders are generalizable retrievers. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 9844–9855. Association for Computational Linguistics.
Xuefei Ning, Zinan Lin, Zixuan Zhou, Huazhong Yang, and Yu Wang. 2023. Skeleton-of-thought: Large language models can do parallel decoding. arXiv preprint arXiv:2307.15337.

Yasumasa Onoe, Michael JQ Zhang, Shankar Padmanabhan, Greg Durrett, and Eunsol Choi. 2023. Can LMs learn new entities from descriptions? Challenges in propagating injected knowledge. arXiv preprint arXiv:2305.01651.

OpenAI. 2023a. ChatGPT. https://openai.com/blog/chatgpt.

OpenAI. 2023b. GPT-4 technical report. arXiv preprint arXiv:2303.08774.

Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744.
Ankur P. Parikh, Xuezhi Wang, Sebastian Gehrmann, Manaal Faruqui, Bhuwan Dhingra, Diyi Yang, and Dipanjan Das. 2020. ToTTo: A controlled table-to-text generation dataset. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1173–1186.

Joon Sung Park, Joseph C O'Brien, Carrie J Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. 2023. Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442.

Adam Pauls and Dan Klein. 2011. Faster and smaller n-gram language models. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 258–267.

Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. 2023. The RefinedWeb dataset for Falcon LLM: outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116.
2309.01219#126
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
While large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks, a significant concern revolves around their propensity to exhibit hallucinations: LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge. This phenomenon poses a substantial challenge to the reliability of LLMs in real-world scenarios. In this paper, we survey recent efforts on the detection, explanation, and mitigation of hallucination, with an emphasis on the unique challenges posed by LLMs. We present taxonomies of the LLM hallucination phenomena and evaluation benchmarks, analyze existing approaches aiming at mitigating LLM hallucination, and discuss potential directions for future research.
http://arxiv.org/pdf/2309.01219
Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi
cs.CL, cs.AI, cs.CY, cs.LG
work in progress; 32 pages
null
cs.CL
20230903
20230924
[ { "id": "2307.03109" }, { "id": "2306.05424" }, { "id": "2305.20050" }, { "id": "2308.06394" }, { "id": "2306.16564" }, { "id": "2307.03917" }, { "id": "2305.11747" }, { "id": "2305.10355" }, { "id": "2308.07317" }, { "id": "2308.10168" }, { "id": "2305.06311" }, { "id": "2307.10169" }, { "id": "2307.15043" }, { "id": "2301.00234" }, { "id": "2305.03268" }, { "id": "2307.15343" }, { "id": "2303.16104" }, { "id": "2309.03118" }, { "id": "2307.11768" }, { "id": "2309.09558" }, { "id": "2305.13300" }, { "id": "2211.05100" }, { "id": "2305.14627" }, { "id": "2305.19187" }, { "id": "2004.00345" }, { "id": "2307.13528" }, { "id": "2210.09150" }, { "id": "2307.04964" }, { "id": "2203.05115" }, { "id": "2309.05936" }, { "id": "2305.11738" }, { "id": "2306.09296" }, { "id": "2309.02654" }, { "id": "2305.14795" }, { "id": "2305.14325" }, { "id": "2203.03802" }, { "id": "2305.14623" }, { "id": "2309.05463" }, { "id": "2308.10792" }, { "id": "2307.10236" }, { "id": "2302.13971" }, { "id": "2308.11764" }, { "id": "2309.11064" }, { "id": "2305.13281" }, { "id": "2306.03341" }, { "id": "2112.09332" }, { "id": "2307.01379" }, { "id": "2309.08594" }, { "id": "2304.05613" }, { "id": "2303.15621" }, { "id": "2301.12652" }, { "id": "2307.06908" }, { "id": "2307.02483" }, { "id": "2304.14178" }, { "id": "2305.13534" }, { "id": "2303.12528" }, { "id": "2306.13063" }, { "id": "2305.18248" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "2005.03642" }, { "id": "2306.05212" }, { "id": "2305.13269" }, { "id": "2305.14908" }, { "id": "2307.11019" }, { "id": "2307.00360" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2305.14002" }, { "id": "2303.18223" }, { "id": "2307.03172" }, { "id": "2307.03987" }, { "id": "2301.09785" }, { "id": "2302.04023" }, { "id": "2210.07229" }, { "id": "2307.05300" }, { "id": "2306.04528" }, { "id": "2305.01651" }, { "id": "1907.11692" }, { "id": "2304.03277" }, { "id": "2305.13669" }, { "id": "2307.06290" }, { 
"id": "2304.01196" }, { "id": "2109.07958" }, { "id": "2309.03883" }, { "id": "2302.07842" }, { "id": "2307.01850" }, { "id": "2305.14251" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2304.07327" }, { "id": "2308.04215" }, { "id": "2306.14565" }, { "id": "2307.15337" }, { "id": "2308.03958" }, { "id": "2306.04751" }, { "id": "2302.00083" }, { "id": "2305.16355" }, { "id": "2305.14552" }, { "id": "2305.13068" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2202.01110" }, { "id": "2307.13702" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2305.12740" }, { "id": "2309.11495" }, { "id": "2305.15852" }, { "id": "2303.08896" }, { "id": "2305.00955" }, { "id": "2304.10513" }, { "id": "2201.05273" }, { "id": "2307.08701" }, { "id": "2205.11482" }, { "id": "2305.04757" }, { "id": "2304.13734" }, { "id": "2304.03442" }, { "id": "2212.09251" }, { "id": "2305.14739" }, { "id": "2305.18153" }, { "id": "2211.14876" }, { "id": "2303.11315" }, { "id": "2305.11206" }, { "id": "2307.16877" }, { "id": "2302.12813" } ]
2309.01219
127
Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Liden, Zhou Yu, Weizhu Chen, et al. 2023a. Check your facts and try again: Improving large language models with external knowledge and automated feedback. arXiv preprint arXiv:2302.12813.

Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. 2023b. Instruction tuning with GPT-4. arXiv preprint arXiv:2304.03277.

Ethan Perez, Sam Ringer, Kamilė Lukošiūtė, Karina Nguyen, Edwin Chen, Scott Heiner, Craig Pettit, Catherine Olsson, Sandipan Kundu, Saurav Kadavath, et al. 2022. Discovering language model behaviors with model-written evaluations. arXiv preprint arXiv:2212.09251.

Xiao Pu, Mingqi Gao, and Xiaojun Wan. 2023. Summarization is (almost) dead. arXiv preprint arXiv:2309.09558.
Cheng Qian, Xinran Zhao, and Sherry Tongshuang Wu. 2023. "Merge conflicts!" Exploring the impacts of external distractors to parametric knowledge graphs. arXiv preprint arXiv:2309.08594.

Shuofei Qiao, Honghao Gui, Huajun Chen, and Ningyu Zhang. 2023. Making language models better tool learners with execution feedback. arXiv preprint arXiv:2305.13068.

Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, et al. 2023. Tool learning with foundation models. arXiv preprint arXiv:2304.08354.

Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. 2020. Pre-trained models for natural language processing: A survey. Science China Technological Sciences, 63(10):1872–1897.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.

Ansh Radhakrishnan, Karina Nguyen, Anna Chen, Carol Chen, Carson Denison, Danny Hernandez, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, et al. 2023. Question decomposition improves the faithfulness of model-generated reasoning. arXiv preprint arXiv:2307.11768.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485–5551.

Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, and Yoav Shoham. 2023. In-context retrieval-augmented language models. arXiv preprint arXiv:2302.00083.
Vipula Rawte, Prachi Priya, SM Tonmoy, SM Zaman, Amit Sheth, and Amitava Das. 2023. Exploring the relationship between LLM hallucinations and prompt linguistic nuances: Readability, formality, and concreteness. arXiv preprint arXiv:2309.11064.

Clément Rebuffel, Marco Roberti, Laure Soulier, Geoffrey Scoutheeten, Rossella Cancelliere, and Patrick Gallinari. 2022. Controlling hallucinations at word level in data-to-text generation. Data Mining and Knowledge Discovery, pages 1–37.

Ruiyang Ren, Yuhao Wang, Yingqi Qu, Wayne Xin Zhao, Jing Liu, Hao Tian, Hua Wu, Ji-Rong Wen, and Haifeng Wang. 2023. Investigating the factual knowledge boundary of large language models with retrieval augmentation. arXiv preprint arXiv:2307.11019.

Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5418–5426.
Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends® in Information Retrieval, 3(4):333–389.

Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. 2022. BLOOM: A 176B-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100.

John Schulman. 2023. Reinforcement learning from human feedback: Progress and challenges.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.

Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed H. Chi, Nathanael Schärli, and Denny Zhou. 2023a. Large language models can be easily distracted by irrelevant context. In Proceedings of the 40th International Conference on Machine Learning, volume 202, pages 31210–31227.
Freda Shi, Daniel Fried, Marjan Ghazvininejad, Luke Zettlemoyer, and Sida I. Wang. 2022. Natural language to code translation with execution. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 3533–3546.

Weijia Shi, Xiaochuang Han, Mike Lewis, Yulia Tsvetkov, Luke Zettlemoyer, and Scott Wen-tau Yih. 2023b. Trusting your evidence: Hallucinate less with context-aware decoding. arXiv preprint arXiv:2305.14739.

Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. 2023c. REPLUG: Retrieval-augmented black-box language models. arXiv preprint arXiv:2301.12652.

Chenglei Si, Zhe Gan, Zhengyuan Yang, Shuohang Wang, Jianfeng Wang, Jordan Boyd-Graber, and Lijuan Wang. 2022. Prompting GPT-3 to be reliable. arXiv preprint arXiv:2210.09150.
2309.01219#132
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
While large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks, a significant concern revolves around their propensity to exhibit hallucinations: LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge. This phenomenon poses a substantial challenge to the reliability of LLMs in real-world scenarios. In this paper, we survey recent efforts on the detection, explanation, and mitigation of hallucination, with an emphasis on the unique challenges posed by LLMs. We present taxonomies of the LLM hallucination phenomena and evaluation benchmarks, analyze existing approaches aiming at mitigating LLM hallucination, and discuss potential directions for future research.
http://arxiv.org/pdf/2309.01219
Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi
cs.CL, cs.AI, cs.CY, cs.LG
work in progress; 32 pages
cs.CL
Published: 2023-09-03
Updated: 2023-09-24
Anton Sinitsin, Vsevolod Plokhotnyuk, Dmitriy Pyrkin, Sergei Popov, and Artem Babenko. 2020. Editable neural networks. arXiv preprint arXiv:2004.00345.

Yixuan Su, Tian Lan, Huayang Li, Jialu Xu, Yan Wang, and Deng Cai. 2023. PandaGPT: One model to instruction-follow them all. arXiv preprint arXiv:2305.16355.

Kai Sun, Yifan Ethan Xu, Hanwen Zha, Yue Liu, and Xin Luna Dong. 2023a. Head-to-Tail: How knowledgeable are large language models (LLMs)? A.k.a. will LLMs replace knowledge graphs? arXiv preprint arXiv:2308.10168.

Tianxiang Sun, Yunfan Shao, Hong Qian, Xuanjing Huang, and Xipeng Qiu. 2022. Black-box tuning for language-model-as-a-service. In International Conference on Machine Learning, pages 20841–20855. PMLR.
Tianxiang Sun, Xiaotian Zhang, Zhengfu He, Peng Li, Qinyuan Cheng, Hang Yan, Xiangyang Liu, Yunfan Shao, Qiong Tang, Xingjian Zhao, Ke Chen, Yining Zheng, Zhejian Zhou, Ruixiao Li, Jun Zhan, Yunhua Zhou, Linyang Li, Xiaogui Yang, Lingling Wu, Zhangyue Yin, Xuanjing Huang, and Xipeng Qiu. 2023b. MOSS: Training conversational language models from synthetic data.

Alex Tamkin, Kunal Handa, Avash Shrestha, and Noah Goodman. 2022. Task ambiguity in humans and language models.

Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford Alpaca: An instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca.

Faraz Torabi, Garrett Warnell, and Peter Stone. 2018. Behavioral cloning from observation. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 4950–4957.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023a. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.

Logesh Kumar Umapathi, Ankit Pal, and Malaikannan Sankarasubbu. 2023. Med-HALT: Medical domain hallucination test for large language models. arXiv preprint arXiv:2307.15343.
Neeraj Varshney, Wenlin Yao, Hongming Zhang, Jianshu Chen, and Dong Yu. 2023. A stitch in time saves nine: Detecting and mitigating hallucinations of LLMs by validating low-confidence generation. arXiv preprint arXiv:2307.03987.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.

Chaojun Wang and Rico Sennrich. 2020. On exposure bias, hallucination and domain shift in neural machine translation. arXiv preprint arXiv:2005.03642.

Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. 2023a. Voyager: An open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291.
Hongmin Wang. 2019. Revisiting challenges in data-to-text generation with fact grounding. In Proceedings of the 12th International Conference on Natural Language Generation, pages 311–322.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V. Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations.

Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A. Smith, Iz Beltagy, et al. 2023b. How far can camels go? Exploring the state of instruction tuning on open resources. arXiv preprint arXiv:2306.04751.
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2023c. Self-Instruct: Aligning language models with self-generated instructions. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13484–13508.

Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, and Heng Ji. 2023d. Unleashing cognitive synergy in large language models: A task-solving agent through multi-persona self-collaboration. arXiv preprint arXiv:2307.05300.

Alexander Wei, Nika Haghtalab, and Jacob Steinhardt. 2023a. Jailbroken: How does LLM safety training fail? arXiv preprint arXiv:2307.02483.

Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2021. Finetuned language models are zero-shot learners. In International Conference on Learning Representations.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837.

Jerry Wei, Da Huang, Yifeng Lu, Denny Zhou, and Quoc V Le. 2023b. Simple synthetic data reduces sycophancy in large language models. arXiv preprint arXiv:2308.03958.

Alexander R Fabbri, Chien-Sheng Wu, Wenhao Liu, and Caiming Xiong. 2023. QAFactEval: Improved QA-based factual consistency evaluation for summarization.

Jian Wu, Yashesh Gaur, Zhuo Chen, Long Zhou, Yimeng Zhu, Tianrui Wang, Jinyu Li, Shujie Liu, Bo Ren, Linquan Liu, et al. 2023a. On decoder-only architecture for speech-to-text and large language model integration. arXiv preprint arXiv:2307.03917.
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
While large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks, a significant concern revolves around their propensity to exhibit hallucinations: LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge. This phenomenon poses a substantial challenge to the reliability of LLMs in real-world scenarios. In this paper, we survey recent efforts on the detection, explanation, and mitigation of hallucination, with an emphasis on the unique challenges posed by LLMs. We present taxonomies of the LLM hallucination phenomena and evaluation benchmarks, analyze existing approaches aiming at mitigating LLM hallucination, and discuss potential directions for future research.
http://arxiv.org/pdf/2309.01219
Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi
cs.CL, cs.AI, cs.CY, cs.LG
work in progress; 32 pages
cs.CL
20230903
20230924
Weiqi Wu, Chengyue Jiang, Yong Jiang, Pengjun Xie, and Kewei Tu. 2023b. Do plms know and understand ontological knowledge? arXiv preprint arXiv:2309.05936.

Yijun Xiao and William Yang Wang. 2021. On hallucination and predictive uncertainty in conditional language generation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2734–2744.

Jian Xie, Kai Zhang, Jiangjie Chen, Renze Lou, and Yu Su. 2023. Adaptive chameleon or stubborn sloth: Unraveling the behavior of large language models in knowledge conflicts. arXiv preprint arXiv:2305.13300.

Miao Xiong, Zhiyuan Hu, Xinyang Lu, Yifei Li, Jie Fu, Junxian He, and Bryan Hooi. 2023. Can llms express their uncertainty? An empirical evaluation of confidence elicitation in llms. arXiv preprint arXiv:2306.13063.
Canwen Xu, Daya Guo, Nan Duan, and Julian McAuley. 2023. Baize: An open-source chat model with parameter-efficient tuning on self-chat data. arXiv preprint arXiv:2304.01196.

Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R Narasimhan, and Yuan Cao. 2022. React: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations.

Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. 2023. mplug-owl: Modularization empowers large language models with multimodality. arXiv preprint arXiv:2304.14178.

Zhangyue Yin, Qiushi Sun, Qipeng Guo, Jiawen Wu, Xipeng Qiu, and Xuanjing Huang. 2023. Do large language models know what they don't know? arXiv preprint arXiv:2305.18153.
Jifan Yu, Xiaozhi Wang, Shangqing Tu, Shulin Cao, Daniel Zhang-Li, Xin Lv, Hao Peng, Zijun Yao, Xiaohan Zhang, Hanming Li, et al. 2023a. Kola: Carefully benchmarking world knowledge of large language models. arXiv preprint arXiv:2306.09296.

Wenhao Yu, Zhihan Zhang, Zhenwen Liang, Meng Jiang, and Ashish Sabharwal. 2023b. Improving language models via plug-and-play retrieval feedback. arXiv preprint arXiv:2305.14002.

Xiang Yue, Boshi Wang, Kai Zhang, Ziru Chen, Yu Su, and Huan Sun. 2023. Automatic evaluation of attribution by large language models. arXiv preprint arXiv:2305.06311.

Sina Zarrieß, Henrik Voigt, and Simeon Schüz. 2021. Decoding methods in neural language generation: a survey. Information, 12(9):355.
Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, et al. 2022. Glm-130b: An open bilingual pre-trained model. In The Eleventh International Conference on Learning Representations.

Yuheng Zha, Yichi Yang, Ruichen Li, and Zhiting Hu. 2023. AlignScore: Evaluating factual consistency with a unified alignment function. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11328–11348.

Muru Zhang, Ofir Press, William Merrill, Alisa Liu, and Noah A Smith. 2023a. How language model hallucinations can snowball. arXiv preprint arXiv:2305.13534.

Shengyu Zhang, Linfeng Dong, Xiaoya Li, Sen Zhang, Xiaofei Sun, Shuhe Wang, Jiwei Li, Runyi Hu, Tianwei Zhang, Fei Wu, et al. 2023b. Instruction tuning for large language models: A survey. arXiv preprint arXiv:2308.10792.
Shuo Zhang, Liangming Pan, Junzhou Zhao, and William Yang Wang. 2023c. Mitigating language model hallucination with interactive question-knowledge alignment. arXiv preprint arXiv:2305.13669.

Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations.

Xuchao Zhang, Menglin Xia, Camille Couturier, Guoqing Zheng, Saravan Rajmohan, and Victor Ruhle. 2023d. Hybrid retrieval-augmented generation for real-time composition assistance. arXiv preprint arXiv:2308.04215.

Ruochen Zhao, Xingxuan Li, Shafiq Joty, Chengwei Qin, and Lidong Bing. 2023a. Verify-and-edit: A knowledge-enhanced chain-of-thought framework. arXiv preprint arXiv:2305.03268.

Theodore Zhao, Mu Wei, J Samuel Preston, and Hoifung Poon. 2023b. Automatic calibration and error correction for large language models via pareto optimal self-supervision. arXiv preprint arXiv:2306.16564.
Wayne Xin Zhao, Jing Liu, Ruiyang Ren, and Ji-Rong Wen. 2022. Dense text retrieval based on pretrained language models: A survey. arXiv preprint arXiv:2211.14876.

Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023c. A survey of large language models. arXiv preprint arXiv:2303.18223.

Ce Zheng, Lei Li, Qingxiu Dong, Yuxuan Fan, Zhiyong Wu, Jingjing Xu, and Baobao Chang. 2023a. Can we edit factual knowledge by in-context learning? arXiv preprint arXiv:2305.12740.

Rui Zheng, Shihan Dou, Songyang Gao, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Limao Xiong, Lu Chen, et al. 2023b. Secrets of rlhf in large language models part i: Ppo. arXiv preprint arXiv:2307.04964.
2309.01219#145
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
While large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks, a significant concern revolves around their propensity to exhibit hallucinations: LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge. This phenomenon poses a substantial challenge to the reliability of LLMs in real-world scenarios. In this paper, we survey recent efforts on the detection, explanation, and mitigation of hallucination, with an emphasis on the unique challenges posed by LLMs. We present taxonomies of the LLM hallucination phenomena and evaluation benchmarks, analyze existing approaches aiming at mitigating LLM hallucination, and discuss potential directions for future research.
http://arxiv.org/pdf/2309.01219
Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi
cs.CL, cs.AI, cs.CY, cs.LG
work in progress; 32 pages
null
cs.CL
20230903
20230924
[ { "id": "2307.03109" }, { "id": "2306.05424" }, { "id": "2305.20050" }, { "id": "2308.06394" }, { "id": "2306.16564" }, { "id": "2307.03917" }, { "id": "2305.11747" }, { "id": "2305.10355" }, { "id": "2308.07317" }, { "id": "2308.10168" }, { "id": "2305.06311" }, { "id": "2307.10169" }, { "id": "2307.15043" }, { "id": "2301.00234" }, { "id": "2305.03268" }, { "id": "2307.15343" }, { "id": "2303.16104" }, { "id": "2309.03118" }, { "id": "2307.11768" }, { "id": "2309.09558" }, { "id": "2305.13300" }, { "id": "2211.05100" }, { "id": "2305.14627" }, { "id": "2305.19187" }, { "id": "2004.00345" }, { "id": "2307.13528" }, { "id": "2210.09150" }, { "id": "2307.04964" }, { "id": "2203.05115" }, { "id": "2309.05936" }, { "id": "2305.11738" }, { "id": "2306.09296" }, { "id": "2309.02654" }, { "id": "2305.14795" }, { "id": "2305.14325" }, { "id": "2203.03802" }, { "id": "2305.14623" }, { "id": "2309.05463" }, { "id": "2308.10792" }, { "id": "2307.10236" }, { "id": "2302.13971" }, { "id": "2308.11764" }, { "id": "2309.11064" }, { "id": "2305.13281" }, { "id": "2306.03341" }, { "id": "2112.09332" }, { "id": "2307.01379" }, { "id": "2309.08594" }, { "id": "2304.05613" }, { "id": "2303.15621" }, { "id": "2301.12652" }, { "id": "2307.06908" }, { "id": "2307.02483" }, { "id": "2304.14178" }, { "id": "2305.13534" }, { "id": "2303.12528" }, { "id": "2306.13063" }, { "id": "2305.18248" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "2005.03642" }, { "id": "2306.05212" }, { "id": "2305.13269" }, { "id": "2305.14908" }, { "id": "2307.11019" }, { "id": "2307.00360" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2305.14002" }, { "id": "2303.18223" }, { "id": "2307.03172" }, { "id": "2307.03987" }, { "id": "2301.09785" }, { "id": "2302.04023" }, { "id": "2210.07229" }, { "id": "2307.05300" }, { "id": "2306.04528" }, { "id": "2305.01651" }, { "id": "1907.11692" }, { "id": "2304.03277" }, { "id": "2305.13669" }, { "id": "2307.06290" }, { 
"id": "2304.01196" }, { "id": "2109.07958" }, { "id": "2309.03883" }, { "id": "2302.07842" }, { "id": "2307.01850" }, { "id": "2305.14251" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2304.07327" }, { "id": "2308.04215" }, { "id": "2306.14565" }, { "id": "2307.15337" }, { "id": "2308.03958" }, { "id": "2306.04751" }, { "id": "2302.00083" }, { "id": "2305.16355" }, { "id": "2305.14552" }, { "id": "2305.13068" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2202.01110" }, { "id": "2307.13702" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2305.12740" }, { "id": "2309.11495" }, { "id": "2305.15852" }, { "id": "2303.08896" }, { "id": "2305.00955" }, { "id": "2304.10513" }, { "id": "2201.05273" }, { "id": "2307.08701" }, { "id": "2205.11482" }, { "id": "2305.04757" }, { "id": "2304.13734" }, { "id": "2304.03442" }, { "id": "2212.09251" }, { "id": "2305.14739" }, { "id": "2305.18153" }, { "id": "2211.14876" }, { "id": "2303.11315" }, { "id": "2305.11206" }, { "id": "2307.16877" }, { "id": "2302.12813" } ]
2309.01219
146
Shen Zheng, Jie Huang, and Kevin Chen-Chuan Chang. 2023c. Why does ChatGPT fall short in providing truthful answers? arXiv preprint arXiv:2304.10513. Ming Zhong, Da Yin, Tao Yu, Ahmad Zaidi, Mutethia Mutuma, Rahul Jha, Ahmed Hassan Awadallah, Asli Celikyilmaz, Yang Liu, Xipeng Qiu, and Dragomir R. Radev. 2021. QMSum: A new benchmark for query-based multi-domain meeting summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 5905–5921. Association for Computational Linguistics. Zexuan Zhong, Zhengxuan Wu, Christopher D Manning, Christopher Potts, and Danqi Chen. 2023. MQuAKE: Assessing knowledge editing in language models via multi-hop questions. arXiv preprint arXiv:2305.14795.
2309.01219#146
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
While large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks, a significant concern revolves around their propensity to exhibit hallucinations: LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge. This phenomenon poses a substantial challenge to the reliability of LLMs in real-world scenarios. In this paper, we survey recent efforts on the detection, explanation, and mitigation of hallucination, with an emphasis on the unique challenges posed by LLMs. We present taxonomies of the LLM hallucination phenomena and evaluation benchmarks, analyze existing approaches aiming at mitigating LLM hallucination, and discuss potential directions for future research.
http://arxiv.org/pdf/2309.01219
Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi
cs.CL, cs.AI, cs.CY, cs.LG
work in progress; 32 pages
null
cs.CL
20230903
20230924
[ { "id": "2307.03109" }, { "id": "2306.05424" }, { "id": "2305.20050" }, { "id": "2308.06394" }, { "id": "2306.16564" }, { "id": "2307.03917" }, { "id": "2305.11747" }, { "id": "2305.10355" }, { "id": "2308.07317" }, { "id": "2308.10168" }, { "id": "2305.06311" }, { "id": "2307.10169" }, { "id": "2307.15043" }, { "id": "2301.00234" }, { "id": "2305.03268" }, { "id": "2307.15343" }, { "id": "2303.16104" }, { "id": "2309.03118" }, { "id": "2307.11768" }, { "id": "2309.09558" }, { "id": "2305.13300" }, { "id": "2211.05100" }, { "id": "2305.14627" }, { "id": "2305.19187" }, { "id": "2004.00345" }, { "id": "2307.13528" }, { "id": "2210.09150" }, { "id": "2307.04964" }, { "id": "2203.05115" }, { "id": "2309.05936" }, { "id": "2305.11738" }, { "id": "2306.09296" }, { "id": "2309.02654" }, { "id": "2305.14795" }, { "id": "2305.14325" }, { "id": "2203.03802" }, { "id": "2305.14623" }, { "id": "2309.05463" }, { "id": "2308.10792" }, { "id": "2307.10236" }, { "id": "2302.13971" }, { "id": "2308.11764" }, { "id": "2309.11064" }, { "id": "2305.13281" }, { "id": "2306.03341" }, { "id": "2112.09332" }, { "id": "2307.01379" }, { "id": "2309.08594" }, { "id": "2304.05613" }, { "id": "2303.15621" }, { "id": "2301.12652" }, { "id": "2307.06908" }, { "id": "2307.02483" }, { "id": "2304.14178" }, { "id": "2305.13534" }, { "id": "2303.12528" }, { "id": "2306.13063" }, { "id": "2305.18248" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "2005.03642" }, { "id": "2306.05212" }, { "id": "2305.13269" }, { "id": "2305.14908" }, { "id": "2307.11019" }, { "id": "2307.00360" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2305.14002" }, { "id": "2303.18223" }, { "id": "2307.03172" }, { "id": "2307.03987" }, { "id": "2301.09785" }, { "id": "2302.04023" }, { "id": "2210.07229" }, { "id": "2307.05300" }, { "id": "2306.04528" }, { "id": "2305.01651" }, { "id": "1907.11692" }, { "id": "2304.03277" }, { "id": "2305.13669" }, { "id": "2307.06290" }, { 
"id": "2304.01196" }, { "id": "2109.07958" }, { "id": "2309.03883" }, { "id": "2302.07842" }, { "id": "2307.01850" }, { "id": "2305.14251" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2304.07327" }, { "id": "2308.04215" }, { "id": "2306.14565" }, { "id": "2307.15337" }, { "id": "2308.03958" }, { "id": "2306.04751" }, { "id": "2302.00083" }, { "id": "2305.16355" }, { "id": "2305.14552" }, { "id": "2305.13068" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2202.01110" }, { "id": "2307.13702" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2305.12740" }, { "id": "2309.11495" }, { "id": "2305.15852" }, { "id": "2303.08896" }, { "id": "2305.00955" }, { "id": "2304.10513" }, { "id": "2201.05273" }, { "id": "2307.08701" }, { "id": "2205.11482" }, { "id": "2305.04757" }, { "id": "2304.13734" }, { "id": "2304.03442" }, { "id": "2212.09251" }, { "id": "2305.14739" }, { "id": "2305.18153" }, { "id": "2211.14876" }, { "id": "2303.11315" }, { "id": "2305.11206" }, { "id": "2307.16877" }, { "id": "2302.12813" } ]
2309.01219
147
Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. 2023a. LIMA: Less is more for alignment. arXiv preprint arXiv:2305.11206. Wenxuan Zhou, Sheng Zhang, Hoifung Poon, and Muhao Chen. 2023b. Context-faithful prompting for large language models. arXiv preprint arXiv:2303.11315. Chenguang Zhu, William Hinthorn, Ruochen Xu, Qingkai Zeng, Michael Zeng, Xuedong Huang, and Meng Jiang. 2021. Enhancing factual consistency of abstractive summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 718–733.
2309.01219#147
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
While large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks, a significant concern revolves around their propensity to exhibit hallucinations: LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge. This phenomenon poses a substantial challenge to the reliability of LLMs in real-world scenarios. In this paper, we survey recent efforts on the detection, explanation, and mitigation of hallucination, with an emphasis on the unique challenges posed by LLMs. We present taxonomies of the LLM hallucination phenomena and evaluation benchmarks, analyze existing approaches aiming at mitigating LLM hallucination, and discuss potential directions for future research.
http://arxiv.org/pdf/2309.01219
Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi
cs.CL, cs.AI, cs.CY, cs.LG
work in progress; 32 pages
null
cs.CL
20230903
20230924
[ { "id": "2307.03109" }, { "id": "2306.05424" }, { "id": "2305.20050" }, { "id": "2308.06394" }, { "id": "2306.16564" }, { "id": "2307.03917" }, { "id": "2305.11747" }, { "id": "2305.10355" }, { "id": "2308.07317" }, { "id": "2308.10168" }, { "id": "2305.06311" }, { "id": "2307.10169" }, { "id": "2307.15043" }, { "id": "2301.00234" }, { "id": "2305.03268" }, { "id": "2307.15343" }, { "id": "2303.16104" }, { "id": "2309.03118" }, { "id": "2307.11768" }, { "id": "2309.09558" }, { "id": "2305.13300" }, { "id": "2211.05100" }, { "id": "2305.14627" }, { "id": "2305.19187" }, { "id": "2004.00345" }, { "id": "2307.13528" }, { "id": "2210.09150" }, { "id": "2307.04964" }, { "id": "2203.05115" }, { "id": "2309.05936" }, { "id": "2305.11738" }, { "id": "2306.09296" }, { "id": "2309.02654" }, { "id": "2305.14795" }, { "id": "2305.14325" }, { "id": "2203.03802" }, { "id": "2305.14623" }, { "id": "2309.05463" }, { "id": "2308.10792" }, { "id": "2307.10236" }, { "id": "2302.13971" }, { "id": "2308.11764" }, { "id": "2309.11064" }, { "id": "2305.13281" }, { "id": "2306.03341" }, { "id": "2112.09332" }, { "id": "2307.01379" }, { "id": "2309.08594" }, { "id": "2304.05613" }, { "id": "2303.15621" }, { "id": "2301.12652" }, { "id": "2307.06908" }, { "id": "2307.02483" }, { "id": "2304.14178" }, { "id": "2305.13534" }, { "id": "2303.12528" }, { "id": "2306.13063" }, { "id": "2305.18248" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "2005.03642" }, { "id": "2306.05212" }, { "id": "2305.13269" }, { "id": "2305.14908" }, { "id": "2307.11019" }, { "id": "2307.00360" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2305.14002" }, { "id": "2303.18223" }, { "id": "2307.03172" }, { "id": "2307.03987" }, { "id": "2301.09785" }, { "id": "2302.04023" }, { "id": "2210.07229" }, { "id": "2307.05300" }, { "id": "2306.04528" }, { "id": "2305.01651" }, { "id": "1907.11692" }, { "id": "2304.03277" }, { "id": "2305.13669" }, { "id": "2307.06290" }, { 
"id": "2304.01196" }, { "id": "2109.07958" }, { "id": "2309.03883" }, { "id": "2302.07842" }, { "id": "2307.01850" }, { "id": "2305.14251" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2304.07327" }, { "id": "2308.04215" }, { "id": "2306.14565" }, { "id": "2307.15337" }, { "id": "2308.03958" }, { "id": "2306.04751" }, { "id": "2302.00083" }, { "id": "2305.16355" }, { "id": "2305.14552" }, { "id": "2305.13068" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2202.01110" }, { "id": "2307.13702" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2305.12740" }, { "id": "2309.11495" }, { "id": "2305.15852" }, { "id": "2303.08896" }, { "id": "2305.00955" }, { "id": "2304.10513" }, { "id": "2201.05273" }, { "id": "2307.08701" }, { "id": "2205.11482" }, { "id": "2305.04757" }, { "id": "2304.13734" }, { "id": "2304.03442" }, { "id": "2212.09251" }, { "id": "2305.14739" }, { "id": "2305.18153" }, { "id": "2211.14876" }, { "id": "2303.11315" }, { "id": "2305.11206" }, { "id": "2307.16877" }, { "id": "2302.12813" } ]
2309.01219
148
Kaijie Zhu, Jindong Wang, Jiaheng Zhou, Zichen Wang, Hao Chen, Yidong Wang, Linyi Yang, Wei Ye, Neil Zhenqiang Gong, Yue Zhang, et al. 2023. PromptBench: Towards evaluating the robustness of large language models on adversarial prompts. arXiv preprint arXiv:2306.04528. Andy Zou, Zifan Wang, J Zico Kolter, and Matt Fredrikson. 2023. Universal and transferable adversarial attacks on aligned language models. arXiv preprint arXiv:2307.15043.
2309.01219#148
Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
While large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks, a significant concern revolves around their propensity to exhibit hallucinations: LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge. This phenomenon poses a substantial challenge to the reliability of LLMs in real-world scenarios. In this paper, we survey recent efforts on the detection, explanation, and mitigation of hallucination, with an emphasis on the unique challenges posed by LLMs. We present taxonomies of the LLM hallucination phenomena and evaluation benchmarks, analyze existing approaches aiming at mitigating LLM hallucination, and discuss potential directions for future research.
http://arxiv.org/pdf/2309.01219
Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi
cs.CL, cs.AI, cs.CY, cs.LG
work in progress; 32 pages
null
cs.CL
20230903
20230924
[ { "id": "2307.03109" }, { "id": "2306.05424" }, { "id": "2305.20050" }, { "id": "2308.06394" }, { "id": "2306.16564" }, { "id": "2307.03917" }, { "id": "2305.11747" }, { "id": "2305.10355" }, { "id": "2308.07317" }, { "id": "2308.10168" }, { "id": "2305.06311" }, { "id": "2307.10169" }, { "id": "2307.15043" }, { "id": "2301.00234" }, { "id": "2305.03268" }, { "id": "2307.15343" }, { "id": "2303.16104" }, { "id": "2309.03118" }, { "id": "2307.11768" }, { "id": "2309.09558" }, { "id": "2305.13300" }, { "id": "2211.05100" }, { "id": "2305.14627" }, { "id": "2305.19187" }, { "id": "2004.00345" }, { "id": "2307.13528" }, { "id": "2210.09150" }, { "id": "2307.04964" }, { "id": "2203.05115" }, { "id": "2309.05936" }, { "id": "2305.11738" }, { "id": "2306.09296" }, { "id": "2309.02654" }, { "id": "2305.14795" }, { "id": "2305.14325" }, { "id": "2203.03802" }, { "id": "2305.14623" }, { "id": "2309.05463" }, { "id": "2308.10792" }, { "id": "2307.10236" }, { "id": "2302.13971" }, { "id": "2308.11764" }, { "id": "2309.11064" }, { "id": "2305.13281" }, { "id": "2306.03341" }, { "id": "2112.09332" }, { "id": "2307.01379" }, { "id": "2309.08594" }, { "id": "2304.05613" }, { "id": "2303.15621" }, { "id": "2301.12652" }, { "id": "2307.06908" }, { "id": "2307.02483" }, { "id": "2304.14178" }, { "id": "2305.13534" }, { "id": "2303.12528" }, { "id": "2306.13063" }, { "id": "2305.18248" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "2005.03642" }, { "id": "2306.05212" }, { "id": "2305.13269" }, { "id": "2305.14908" }, { "id": "2307.11019" }, { "id": "2307.00360" }, { "id": "2304.08354" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2305.14002" }, { "id": "2303.18223" }, { "id": "2307.03172" }, { "id": "2307.03987" }, { "id": "2301.09785" }, { "id": "2302.04023" }, { "id": "2210.07229" }, { "id": "2307.05300" }, { "id": "2306.04528" }, { "id": "2305.01651" }, { "id": "1907.11692" }, { "id": "2304.03277" }, { "id": "2305.13669" }, { "id": "2307.06290" }, { 
"id": "2304.01196" }, { "id": "2109.07958" }, { "id": "2309.03883" }, { "id": "2302.07842" }, { "id": "2307.01850" }, { "id": "2305.14251" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2304.07327" }, { "id": "2308.04215" }, { "id": "2306.14565" }, { "id": "2307.15337" }, { "id": "2308.03958" }, { "id": "2306.04751" }, { "id": "2302.00083" }, { "id": "2305.16355" }, { "id": "2305.14552" }, { "id": "2305.13068" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2202.01110" }, { "id": "2307.13702" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2305.12740" }, { "id": "2309.11495" }, { "id": "2305.15852" }, { "id": "2303.08896" }, { "id": "2305.00955" }, { "id": "2304.10513" }, { "id": "2201.05273" }, { "id": "2307.08701" }, { "id": "2205.11482" }, { "id": "2305.04757" }, { "id": "2304.13734" }, { "id": "2304.03442" }, { "id": "2212.09251" }, { "id": "2305.14739" }, { "id": "2305.18153" }, { "id": "2211.14876" }, { "id": "2303.11315" }, { "id": "2305.11206" }, { "id": "2307.16877" }, { "id": "2302.12813" } ]
2309.00986
1
# ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models Chenliang Li, Hehong Chen, Ming Yan∗, Weizhou Shen, Haiyang Xu, Zhikai Wu, Zhicheng Zhang, Wenmeng Zhou, Yingda Chen, Chen Cheng, Hongzhu Shi, Ji Zhang, Fei Huang, Jingren Zhou DAMO Academy, Alibaba Group, China # Abstract Large language models (LLMs) have recently demonstrated remarkable capabilities to comprehend human intentions, engage in reasoning, and design planning-like behavior. To further unleash the power of LLMs to accomplish complex tasks, there is a growing trend to build agent framework that equips LLMs, such as ChatGPT, with tool-use abilities to connect with massive external APIs.
2309.00986#1
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
Large language models (LLMs) have recently demonstrated remarkable capabilities to comprehend human intentions, engage in reasoning, and design planning-like behavior. To further unleash the power of LLMs to accomplish complex tasks, there is a growing trend to build agent framework that equips LLMs, such as ChatGPT, with tool-use abilities to connect with massive external APIs. In this work, we introduce ModelScope-Agent, a general and customizable agent framework for real-world applications, based on open-source LLMs as controllers. It provides a user-friendly system library, with customizable engine design to support model training on multiple open-source LLMs, while also enabling seamless integration with both model APIs and common APIs in a unified way. To equip the LLMs with tool-use abilities, a comprehensive framework has been proposed spanning over tool-use data collection, tool retrieval, tool registration, memory control, customized model training, and evaluation for practical real-world applications. Finally, we showcase ModelScopeGPT, a real-world intelligent assistant of ModelScope Community based on the ModelScope-Agent framework, which is able to connect open-source LLMs with more than 1000 public AI models and localized community knowledge in ModelScope. The ModelScope-Agent library\footnote{https://github.com/modelscope/modelscope-agent} and online demo\footnote{https://modelscope.cn/studios/damo/ModelScopeGPT/summary} are now publicly available.
http://arxiv.org/pdf/2309.00986
Chenliang Li, Hehong Chen, Ming Yan, Weizhou Shen, Haiyang Xu, Zhikai Wu, Zhicheng Zhang, Wenmeng Zhou, Yingda Chen, Chen Cheng, Hongzhu Shi, Ji Zhang, Fei Huang, Jingren Zhou
cs.CL
null
null
cs.CL
20230902
20230902
[ { "id": "2304.07849" }, { "id": "2302.13971" }, { "id": "2306.05685" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2302.04761" }, { "id": "2304.08244" }, { "id": "2211.01786" }, { "id": "2204.01691" }, { "id": "2304.08354" }, { "id": "2203.15556" }, { "id": "2303.04671" }, { "id": "2305.18752" }, { "id": "2306.05301" }, { "id": "2112.11446" } ]
2309.00986
2
In this work, we introduce ModelScope-Agent, a general and customizable agent framework for real-world applications, based on open-source LLMs as controllers. It provides a user-friendly system library, with customizable engine design to support model training on multiple open-source LLMs, while also enabling seamless integration with both model APIs and common APIs in a unified way. To equip the LLMs with tool-use abilities, a comprehensive framework has been proposed spanning over tool-use data collection, tool retrieval, tool registration, memory control, customized model training, and evaluation for practical real-world applications. Finally, we showcase ModelScopeGPT, a real-world intelligent assistant of ModelScope Community based on the ModelScope-Agent framework, which is able to connect open-source LLMs with more than 1000 public AI models and localized community knowledge in ModelScope. The ModelScope-Agent library1 and online demo2 are now publicly available. Despite the rapid advancements of open-source LLMs, e.g., LLaMA (Touvron et al., 2023) and ChatGLM (THUDM, 2023), they still remain limited in performing complex tasks, such as following user instructions to use external tools and capture up-to-date information.
2309.00986#2
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
Large language models (LLMs) have recently demonstrated remarkable capabilities to comprehend human intentions, engage in reasoning, and design planning-like behavior. To further unleash the power of LLMs to accomplish complex tasks, there is a growing trend to build agent framework that equips LLMs, such as ChatGPT, with tool-use abilities to connect with massive external APIs. In this work, we introduce ModelScope-Agent, a general and customizable agent framework for real-world applications, based on open-source LLMs as controllers. It provides a user-friendly system library, with customizable engine design to support model training on multiple open-source LLMs, while also enabling seamless integration with both model APIs and common APIs in a unified way. To equip the LLMs with tool-use abilities, a comprehensive framework has been proposed spanning over tool-use data collection, tool retrieval, tool registration, memory control, customized model training, and evaluation for practical real-world applications. Finally, we showcase ModelScopeGPT, a real-world intelligent assistant of ModelScope Community based on the ModelScope-Agent framework, which is able to connect open-source LLMs with more than 1000 public AI models and localized community knowledge in ModelScope. The ModelScope-Agent library\footnote{https://github.com/modelscope/modelscope-agent} and online demo\footnote{https://modelscope.cn/studios/damo/ModelScopeGPT/summary} are now publicly available.
http://arxiv.org/pdf/2309.00986
Chenliang Li, Hehong Chen, Ming Yan, Weizhou Shen, Haiyang Xu, Zhikai Wu, Zhicheng Zhang, Wenmeng Zhou, Yingda Chen, Chen Cheng, Hongzhu Shi, Ji Zhang, Fei Huang, Jingren Zhou
cs.CL
null
null
cs.CL
20230902
20230902
[ { "id": "2304.07849" }, { "id": "2302.13971" }, { "id": "2306.05685" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2302.04761" }, { "id": "2304.08244" }, { "id": "2211.01786" }, { "id": "2204.01691" }, { "id": "2304.08354" }, { "id": "2203.15556" }, { "id": "2303.04671" }, { "id": "2305.18752" }, { "id": "2306.05301" }, { "id": "2112.11446" } ]
2309.00986
3
To further unleash the power of LLMs for real-world practical applications, a rising trend of current research (Schick et al., 2023; Shen et al., 2023; Yang et al., 2023; Qin et al., 2023; Patil et al., 2023) begins to enable LLMs with tool-use abilities towards building an AI Agent. These include HuggingGPT (Shen et al., 2023), Visual-ChatGPT (Wu et al., 2023) and Gorilla (Patil et al., 2023) for connecting with HuggingFace models, ToolAlpaca (Tang et al., 2023) and ToolLLaMA (Qin et al., 2023) for using massive common APIs such as weather forecast and search engine. These methods either directly rely on closed-source counterparts like ChatGPT or focus on certain types of API tools. Recently, there have also been public releases of AI agents, such as Auto-GPT3, LangChain4 and Transformers Agent (Huggingface, 2023), which enable LLMs, such as ChatGPT or GPT-4, to use tools and solve complex AI tasks. However, these agents are mainly built with closed-source LLMs and how to build a customizable agent system with open-source LLMs remains largely unexplored. # Introduction
2309.00986#3
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
Large language models (LLMs) have recently demonstrated remarkable capabilities to comprehend human intentions, engage in reasoning, and design planning-like behavior. To further unleash the power of LLMs to accomplish complex tasks, there is a growing trend to build agent framework that equips LLMs, such as ChatGPT, with tool-use abilities to connect with massive external APIs. In this work, we introduce ModelScope-Agent, a general and customizable agent framework for real-world applications, based on open-source LLMs as controllers. It provides a user-friendly system library, with customizable engine design to support model training on multiple open-source LLMs, while also enabling seamless integration with both model APIs and common APIs in a unified way. To equip the LLMs with tool-use abilities, a comprehensive framework has been proposed spanning over tool-use data collection, tool retrieval, tool registration, memory control, customized model training, and evaluation for practical real-world applications. Finally, we showcase ModelScopeGPT, a real-world intelligent assistant of ModelScope Community based on the ModelScope-Agent framework, which is able to connect open-source LLMs with more than 1000 public AI models and localized community knowledge in ModelScope. The ModelScope-Agent library\footnote{https://github.com/modelscope/modelscope-agent} and online demo\footnote{https://modelscope.cn/studios/damo/ModelScopeGPT/summary} are now publicly available.
http://arxiv.org/pdf/2309.00986
Chenliang Li, Hehong Chen, Ming Yan, Weizhou Shen, Haiyang Xu, Zhikai Wu, Zhicheng Zhang, Wenmeng Zhou, Yingda Chen, Chen Cheng, Hongzhu Shi, Ji Zhang, Fei Huang, Jingren Zhou
cs.CL
null
null
cs.CL
20230902
20230902
[ { "id": "2304.07849" }, { "id": "2302.13971" }, { "id": "2306.05685" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2302.04761" }, { "id": "2304.08244" }, { "id": "2211.01786" }, { "id": "2204.01691" }, { "id": "2304.08354" }, { "id": "2203.15556" }, { "id": "2303.04671" }, { "id": "2305.18752" }, { "id": "2306.05301" }, { "id": "2112.11446" } ]
2309.00986
4
# Introduction Large language models (OpenAI, 2022, 2023; Touvron et al., 2023; Chowdhery et al., 2022) have gradually become common AI assistants that demonstrate great potential in comprehending human intentions, performing complex reasoning tasks, and enabling content creation. In this work, we present ModelScope-Agent, a general and customizable agent system for real-world applications, based on open-source LLMs as controllers. ModelScope5 is a public ML community, which seeks to bring together the most advanced machine learning models from the AI community, and streamlines the process of leveraging AI models in real-world applications. ModelScope-Agent provides a flexible and user-friendly system library, with customizable engine design to ∗Corresponding author: <[email protected]> 1https://github.com/modelscope/modelscope-agent 2https://modelscope.cn/studios/damo/ModelScopeGPT/summary 3https://github.com/Significant-Gravitas/Auto-GPT 4https://github.com/langchain-ai/langchain 5https://modelscope.cn/models
2309.00986#4
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
Large language models (LLMs) have recently demonstrated remarkable capabilities to comprehend human intentions, engage in reasoning, and design planning-like behavior. To further unleash the power of LLMs to accomplish complex tasks, there is a growing trend to build agent framework that equips LLMs, such as ChatGPT, with tool-use abilities to connect with massive external APIs. In this work, we introduce ModelScope-Agent, a general and customizable agent framework for real-world applications, based on open-source LLMs as controllers. It provides a user-friendly system library, with customizable engine design to support model training on multiple open-source LLMs, while also enabling seamless integration with both model APIs and common APIs in a unified way. To equip the LLMs with tool-use abilities, a comprehensive framework has been proposed spanning over tool-use data collection, tool retrieval, tool registration, memory control, customized model training, and evaluation for practical real-world applications. Finally, we showcase ModelScopeGPT, a real-world intelligent assistant of ModelScope Community based on the ModelScope-Agent framework, which is able to connect open-source LLMs with more than 1000 public AI models and localized community knowledge in ModelScope. The ModelScope-Agent library\footnote{https://github.com/modelscope/modelscope-agent} and online demo\footnote{https://modelscope.cn/studios/damo/ModelScopeGPT/summary} are now publicly available.
http://arxiv.org/pdf/2309.00986
Chenliang Li, Hehong Chen, Ming Yan, Weizhou Shen, Haiyang Xu, Zhikai Wu, Zhicheng Zhang, Wenmeng Zhou, Yingda Chen, Chen Cheng, Hongzhu Shi, Ji Zhang, Fei Huang, Jingren Zhou
cs.CL
null
null
cs.CL
20230902
20230902
[ { "id": "2304.07849" }, { "id": "2302.13971" }, { "id": "2306.05685" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2302.04761" }, { "id": "2304.08244" }, { "id": "2211.01786" }, { "id": "2204.01691" }, { "id": "2304.08354" }, { "id": "2203.15556" }, { "id": "2303.04671" }, { "id": "2305.18752" }, { "id": "2306.05301" }, { "id": "2112.11446" } ]
2309.00986
5
support model training on multiple open-source LLMs, while also enabling seamless integration with both model APIs and common APIs in a unified way. It features an LLM-centric system design, which includes open-source LLMs as the core controller that further interacts with a tool-use module and a memory module to accomplish complex tasks. At the core of ModelScope-Agent, the library supports flexible selection and training on various open-source LLMs, such as LLaMA (Touvron et al., 2023), ChatGLM (THUDM, 2023), ChatPLUG (Tian et al., 2023) and other customized LLMs in ModelScope. For tool use, ModelScope-Agent provides a default tool library, which supports diverse AI model APIs across the NLP, CV, Audio and Multi-modal fields, as well as massive common APIs such as search engines. It also supports registering new self-defined API plugins and automatic API retrieval from the large tool library. It is easy for users to customize their most appropriate LLMs, local API tools and functions to develop real-world applications. Moreover, a memory module is also introduced to better store and manage the system message, user history, in-context examples, tool messages and localized knowledge.
2309.00986#5
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
2309.00986
6
To enable the open-source LLMs to better control the whole agent system, we further propose a comprehensive framework of tool-use data collection, customized model training, evaluation and deployment. Notably, we release a comprehensive tool-enhanced dataset, MSAgent-Bench, which consists of 598k dialogues covering various API categories, multi-turn API calls, API-oriented QA, and API-agnostic instructions in both English and Chinese. A simple training strategy, Weighted LM, which upweights the training loss on the generation of API names and parameters, is used to better ensure the correctness of API calls. Besides, an evaluation framework is also supported in our library to examine the tool-use abilities of the trained models in different aspects. Furthermore, we apply ModelScope-Agent in a real-world application of the ModelScope Community, namely ModelScopeGPT, which is able to connect open-source LLMs with more than 1000 public AI models and access localized community knowledge in ModelScope for community QA. To summarize, ModelScope-Agent is a general and customizable agent system designed for developers to harness the power of open-source LLMs. The library targets the following goals:
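The Weighted LM idea, upweighting the loss on tokens that belong to the API name and parameters, can be sketched as follows. This is a minimal pure-Python illustration of the weighting principle, not the paper's exact implementation; the weight value and normalization are assumptions.

```python
import math

def weighted_lm_loss(token_probs, is_api_token, api_weight=2.0):
    """Cross-entropy over target tokens, with API-related tokens
    (API name and parameters) upweighted by api_weight."""
    total, norm = 0.0, 0.0
    for p, is_api in zip(token_probs, is_api_token):
        w = api_weight if is_api else 1.0
        total += -w * math.log(p)
        norm += w
    return total / norm

# Toy example: the model is less confident on the API-name token,
# so upweighting it increases the training signal there.
probs = [0.9, 0.6, 0.9]          # model probability of each target token
api_mask = [False, True, False]  # middle token belongs to the API call
loss = weighted_lm_loss(probs, api_mask)
```

With `api_weight=1.0` this reduces to the ordinary token-averaged cross-entropy, so the weighting only changes how much the API-call tokens contribute to each update.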
2309.00986#6
2309.00986
7
To summarize, ModelScope-Agent is a general and customizable agent system designed for developers to harness the power of open-source LLMs. The library targets the following goals:

• Agent based on Open-Source LLMs: the controller of ModelScope-Agent can be flexibly selected from open-source LLMs that are optimized through our agent training framework.

• Support and Customization of Diverse Tools: dozens of diverse model APIs and common APIs are provided by default. The library supports registering new self-defined APIs and automatic API retrieval from the toolset.

• Customization of Applications: ModelScope-Agent can be flexibly applied in various industry applications. The agent and training framework are documented, describing their usage, construction and optimization.

ModelScope-Agent is in continual development by the engineers at ModelScope and is released under an Apache 2.0 license. Full documentation is available through the project website.

# 2 The ModelScope Agent
2309.00986#7
2309.00986
8
ModelScope-Agent is in continual development by the engineers at ModelScope and is released under an Apache 2.0 license. Full documentation is available through the project website.

# 2 The ModelScope Agent

ModelScope-Agent is designed to facilitate developers in building customizable agent systems based on open-source LLMs. The overall system architecture is shown in Figure 1. It includes open-source LLMs as the controller, together with a tool-use module and a memory module to interact with. Given a human instruction, the Agent, which adopts the selected LLM as the controller, will automatically plan tasks, selectively use tools, leverage knowledge in memory, and finally provide helpful responses to users.

# 2.1 LLMs as Brain
2309.00986#8
2309.00986
9
# 2.1 LLMs as Brain

LLMs serve as the brain of the agent, responsible for planning and decomposing user requests, selectively calling tools, performing retrieval, and integrating all the information from previous steps to generate the final response. In order to make it easier for users to customize the agent with their own LLMs, we have added support for various open-source LLMs by default, such as LLaMA, ChatGLM and ChatPLUG, which have been optimized through our tool learning pipeline. The details of the training strategy and tool-use datasets are described in Section 3. ModelScope-Agent has integrated the LLM inference pipeline of the ModelScope community, and replacing LLMs can be done by simply setting the model_name and model_config. In model_config, the model_id, model_revision, and model parameter settings such as the max sequence length should be configured.
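A per-model configuration along these lines might look like the following sketch. Only the fields named above (model_id, model_revision, and a max-sequence-length setting) come from the text; the model name, the id value, and the lookup helper are illustrative assumptions, not the library's actual schema.

```python
# Hypothetical model name and config, for illustration only.
model_name = "my-agent-llm"

model_config = {
    model_name: {
        "model_id": "damo/my-agent-llm",  # hypothetical ModelScope id
        "model_revision": "v1.0.0",
        "max_length": 2048,               # max sequence length
    }
}

def get_model_cfg(name, config):
    """Look up the settings for the selected LLM."""
    if name not in config:
        raise KeyError(f"no config for model {name}")
    return config[name]

cfg = get_model_cfg(model_name, model_config)
```

Swapping in a different LLM then amounts to changing `model_name` and adding a matching entry to the config.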
2309.00986#9
2309.00986
10
Figure 1: The overall system architecture of ModelScope-Agent.

# LLM config "cfg_file"
from modelscope.utils.config import Config
model_cfg = Config.from_file(cfg_file)
llm = LocalLLM(model_name, model_cfg)

Furthermore, ModelScope-Agent also provides a standard way to integrate a new LLM. Users can add their own LLMs by integrating the LLM pipeline into ModelScope. After that, the agent can select the new LLMs for training and inference.

# 2.2 Tool Use
2309.00986#10
2309.00986
11
# 2.2 Tool Use

Tool Library The tool library is used to configure and manage the various collections of APIs used in the agent. ModelScope-Agent supports a wide range of both common APIs, such as search APIs, and AI model APIs across NLP, CV, Audio and Multi-modal models in ModelScope and HuggingFace. Each tool API consists of the API name, description, parameters and request functions. Users can easily choose and configure proper APIs in the library to build their own agent. The default APIs supported in the library are listed in Appendix A.1.

# tool default config file "default_file"
tool_cfg = Config.from_file(default_file)

More details about CustomTool can be found in Appendix A.2.

from modelscope_agent.tools import Tool

class CustomTool(Tool):
    # logic added here
    # refer to the example in Appendix A.2

tool_list = {'custom-tool': CustomTool()}
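A filled-in self-defined tool along these lines might look like the following sketch. The Tool base class is simplified here for illustration (the real one lives in modelscope_agent.tools), and the weather API shown is hypothetical, not one of the library's default tools.

```python
# Minimal stand-in for modelscope_agent.tools.Tool, for illustration only.
class Tool:
    name = "base-tool"
    description = ""
    parameters = []

    def __call__(self, **kwargs):
        raise NotImplementedError

class WeatherTool(Tool):
    # API-related schema: name, description and parameters, as described above.
    name = "weather-api"
    description = "Query the current weather for a city."
    parameters = [{"name": "city", "type": "string", "required": True}]

    def __call__(self, **kwargs):
        city = kwargs["city"]
        # A real tool would issue an HTTP request here; we return a stub.
        return {"city": city, "weather": "sunny"}

# Register the new tool so the agent can select it.
tool_list = {WeatherTool.name: WeatherTool()}
result = tool_list["weather-api"](city="Hangzhou")
```

Because registration is just a dictionary entry keyed by the tool name, newly defined tools become selectable by the LLM without further changes to the agent.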
2309.00986#11
2309.00986
12
Tool Retrieval and Execution Due to the large number of tool APIs in the tool library, a tool retrieval module is further introduced to recommend appropriate APIs for each instruction prompt. Specifically, we use a dense vector retrieval method based on the unified multilingual text-embedding API6. We vectorize both the text descriptions of the APIs and the instruction prompt using the text-embedding API. The top-3 most relevant APIs with the highest vector product scores are selected for tool use. As a result, the schema information of the retrieved APIs will be concatenated with other system prompts in the subsequent memory module and sent to the LLMs as input. With the concatenated instruction prompt, the LLMs will plan and generate the API request, which will be executed by the agent. The agent will then return the results to the LLMs for continuous generation.

Register and Customize New Tool The agent allows users to register and customize new tools, while also supporting quick integration of newly registered tools into the agent, enabling LLMs to selectively use the additional self-defined tools for specific applications. This can simply be done by inheriting from a base class, namely Tool, and defining a new CustomTool with the API-related schema of API name, description, parameters, and request functions.

# 2.3 Memory Control
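The top-3 retrieval by vector product described above can be sketched as follows. The toy embeddings stand in for the output of the text-embedding API, which is not called here, and the API names are illustrative.

```python
def dot(u, v):
    """Vector product (unnormalized inner product) of two embeddings."""
    return sum(a * b for a, b in zip(u, v))

def retrieve_top_k(query_vec, api_vecs, k=3):
    """Rank API-description embeddings by vector product with the
    query embedding and return the names of the top-k APIs."""
    scored = sorted(api_vecs.items(),
                    key=lambda item: dot(query_vec, item[1]),
                    reverse=True)
    return [name for name, _ in scored[:k]]

# Toy embeddings standing in for the text-embedding API output.
api_vecs = {
    "text-to-image": [0.9, 0.1, 0.0],
    "translation":   [0.1, 0.9, 0.0],
    "web-search":    [0.2, 0.2, 0.9],
    "calculator":    [0.0, 0.1, 0.1],
}
query = [1.0, 0.0, 0.2]   # e.g. embedding of "draw a logo image"
top3 = retrieve_top_k(query, api_vecs)
```

The schemas of the three returned APIs are what gets concatenated into the instruction prompt for the LLM.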
2309.00986#12
2309.00986
13
# 2.3 Memory Control

The memory module is used to retrieve and assemble a series of contextual information as input to the LLMs. It consists of a knowledge retrieval submodule and a prompt generator submodule, which are responsible for external knowledge retrieval and instruction prompt generation, respectively.

6https://help.aliyun.com/zh/dashscope/getting-started-1

Knowledge Retrieval This enables the agent to access up-to-date and localized information related to the query prompt, thereby augmenting the LLMs with dynamic and domain-specific knowledge. We follow the same dense vector retrieval method as the previous tool retrieval module, and support large-scale knowledge retrieval from a localized document corpus. Similarly, it allows users to customize by switching to other open-source retrieval frameworks.

Prompt Generator The prompt generator is used to assemble all available contextual information, such as the system prompt, API schema, retrieved knowledge, conversation history, and few-shot examples. According to the type of user query and the maximum length of the LLM, users can selectively choose the proper contextual information and assemble the required input to the LLM. In our agent, the prompt generator needs to be defined before the agent is constructed.

# 2.4 Agent Pipeline
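Assembling the contextual pieces under the LLM's length budget, as the prompt generator does, can be sketched like this. The field names and the drop-oldest-history truncation policy are illustrative assumptions, not the library's actual prompt generator.

```python
def generate_prompt(system_prompt, api_schema, knowledge, history,
                    max_len=2048):
    """Concatenate the available context, dropping the oldest history
    turns first when the result would exceed the LLM's length budget."""
    history = list(history)
    while True:
        parts = [system_prompt, api_schema, knowledge] + history
        prompt = "\n".join(p for p in parts if p)
        if len(prompt) <= max_len or not history:
            return prompt
        history.pop(0)   # drop the oldest turn

prompt = generate_prompt(
    system_prompt="You are a helpful agent.",
    api_schema='{"name": "weather-api", "parameters": ["city"]}',
    knowledge="ModelScope hosts community AI models.",
    history=["user: hi", "assistant: hello"],
    max_len=200,
)
```

A length budget in characters stands in here for the token budget a real deployment would use.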
2309.00986#13
2309.00986
14
# 2.4 Agent Pipeline

In summary, we build the agent by combining all the modules: the LLM controller, the tool-use module, and the memory module. With agent.run, the agent can efficiently execute and complete an instruction in a one-step generation. First, the agent retrieves query-related tools through tool retrieval and combines the retrieved API schema with other contextual prompts in the memory module to construct a new instruction prompt. Then, the agent sends this new prompt to the LLM, which plans whether and which API to call and generates an API request. Next, the agent executes the selected API with the extracted API parameters and returns the API results to the LLM, which will continue to plan whether to call other APIs. If another API call is needed, the process is repeated; otherwise, the LLM generates the final response and the agent returns the final result to the user.

agent = AgentExecutor(llm, tool_cfg, additional_tool_list=tool_list)
agent.run("Draw a logo image of agent")

# 3 Training

# 3.1 Dataset

To facilitate building an agent with the ability to use tools while upholding an optimal level of user engagement, we release a comprehensive tool
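The plan-call-observe loop that agent.run performs can be sketched with a stub LLM and a stub tool. Everything below is an illustrative simplification of the flow just described, not the library's internals; the action format and helper names are assumptions.

```python
def run_agent(llm_step, tools, instruction, max_steps=5):
    """Repeatedly ask the LLM to plan; execute the requested API and
    feed its result back, until the LLM emits a final answer."""
    context = instruction
    for _ in range(max_steps):
        action = llm_step(context)          # LLM plans the next step
        if action["type"] == "final":
            return action["content"]
        tool = tools[action["api"]]         # execute the selected API
        result = tool(**action["params"])
        context += f"\n[{action['api']} -> {result}]"  # observe the result
    return "max steps reached"

# Stub LLM: first call an image tool, then answer once it sees the result.
def llm_step(context):
    if "image-url" not in context:
        return {"type": "call", "api": "text-to-image",
                "params": {"prompt": "agent logo"}}
    return {"type": "final", "content": "Here is your logo."}

tools = {"text-to-image": lambda prompt: "image-url"}
answer = run_agent(llm_step, tools, "Draw a logo image of agent")
```

The loop terminates either when the LLM decides no further API call is needed or when the step budget is exhausted.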
2309.00986#14
2309.00986
15
dataset, MSAgent-Bench7, utilizing ChatGPT synthetic data and existing instruction-following datasets. Our released dataset encompasses 598k dialogues. Table 1 outlines the key differences between the released dataset and other publicly available tool learning datasets, while the data distribution of our dataset is illustrated in Figure 2. As demonstrated in the table and figure, we have made certain efforts to construct a comprehensive dataset that enables effective training of an agent:

Multilingual: We collect instances in both Chinese and English, ensuring that the trained agent is capable of functioning in both languages.

Various API Categories: Our dataset supports common APIs that have been registered by users or applied through online API platforms, as well as model APIs that call neural models.

Multi-Turn Dialog: In real-life scenarios, agents may need to request more specific clarification from users to complete a task, or receive additional instructions after completing a previous task. Our dataset accounts for these scenarios and supports multi-turn user-agent interactions when using tools.
API-Oriented QA: An effective agent should possess knowledge of APIs. Our dataset incorporates API document QA tasks and task planning tasks, which require agents to offer appropriate suggestions to users on how to use various APIs to solve complex tasks.

API-Agnostic Instructions: To enhance the agent's ability to follow common instructions and increase user engagement, we have incorporated both Chinese and English API-agnostic instructions within our dataset. These instructions place greater emphasis on the agent's inherent capabilities rather than reliance on API invocation.
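A multi-turn tool-use instance of the kind described above might be represented as follows. This is a hypothetical layout for illustration only; the field names are ours, and the actual MSAgent-Bench format may differ.

```python
# Hypothetical multi-turn tool-use instance (illustrative field names,
# not the released MSAgent-Bench schema).
instance = {
    "language": "en",
    "type": "tool_use",
    "turns": [
        {"role": "user", "text": "Draw a logo image of agent"},
        {"role": "agent", "api": "text-to-image",
         "params": {"prompt": "a logo of an agent"}},
        {"role": "tool", "result": "<image url>"},
        {"role": "agent", "text": "Here is the generated logo."},
    ],
}
```

Storing the full turn sequence, rather than a single prompt-response pair, is what lets the agent learn to ask for clarification or accept follow-up instructions mid-task.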
The data was collected by prompting ChatGPT (gpt-3.5-turbo) to generate instructions, API requests, and answers based on the API calling results; more details can be found in Appendix D.

# 3.2 Model Training

We use MSAgent-Bench to fine-tune multiple open-source LLMs, including LLaMA (Touvron et al., 2023), Qwen (QwenLM, 2023), and ChatPLUG (Tian et al., 2023). We train all the open-source LLMs in a multi-round conversation mode and concatenate all the prompts and answers.

7 https://modelscope.cn/datasets/damo/MSAgent-Bench/summary
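Concatenating a multi-round conversation into a single training sequence can be sketched as below. This is an illustrative scheme under common conventions: masking user turns out of the loss is a standard practice, and the helper name and toy tokenizer are ours, not the paper's.

```python
def build_training_sequence(dialog):
    # Flatten a multi-round conversation into one token sequence; the loss
    # is computed only on assistant tokens (mask=1), not on user prompts.
    tokens, loss_mask = [], []
    for turn in dialog:
        words = turn["text"].split()          # toy word-level "tokenizer"
        tokens += words
        loss_mask += [1 if turn["role"] == "assistant" else 0] * len(words)
    return tokens, loss_mask

dialog = [
    {"role": "user", "text": "Draw a logo"},
    {"role": "assistant", "text": "<API> text-to-image </API>"},
]
tokens, mask = build_training_sequence(dialog)
```

The same mask is the natural place to attach per-token weights, e.g. up-weighting API-name and parameter tokens as in the Weighted LM strategy described in Section 3.2.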
| Dataset | Language | Instance Type | # Instances | API Type | Avg. Turn | Avg. Step |
|---|---|---|---|---|---|---|
| API-Bank (Li et al., 2023) | English | Tool Use | 264 | Common API | 3.27 | 1.92 |
| ToolAlpaca (Tang et al., 2023) | English | Tool Use | 3.9K | Common API | 1 | 1.66 |
| Gorilla (Patil et al., 2023) | English | Tool Use | 16.4K | Model API | 1 | 1 |
| GPT4Tools (Yang et al., 2023) | English | Tool Use | 71.4K | Model API | 1 | 1 |
| ToolBench (Qin et al., 2023) | English | Tool Use | 26.9K | Common API | 1 | 4.1 |
| MSAgent-Bench (ours) | English + Chinese | Tool Use + Common Chat | 598K | Common API + Model API | 1.52 | 1.31 |

Table 1: The statistics of MSAgent-Bench and other existing tool learning datasets.

Figure 2: The instance types and distribution of our collected MSAgent-Bench. (The figure groups instances into model APIs such as text-to-image, text-to-video, text-to-audio, translation, image chat, and universal IE; common APIs such as weather, web search, calculator, and map; API-oriented QA such as document QA and task planning; and API-agnostic instructions such as story generation, open QA, code, chit chat, paraphrase, STEM, and role play.)
Compared to common instruction tuning data, the tool learning samples focus more heavily on the accuracy of tool selection and API parameter prediction. Therefore, we propose a simple training strategy, Weighted LM, which enhances the training signal on the generated API name and parameters, while zeroing out the loss on tokens from the user prompt and the tool execution results. More details can be found in Appendix B.3.

```python
kwargs = dict(model=model, ...)  # other training arguments elided
trainer: EpochBasedTrainer = build_trainer(name=args.trainer, default_args=kwargs)
trainer.train()
```

# 4 Evaluation

Our evaluation system, MSAgent-Eval, comprises two modules: an automatic evaluation framework, which comprehensively evaluates the API usability of the agents, and a human evaluation framework, implemented as an agent arena, which reflects the preferences of human users.

# 4.1 Automatic Evaluation Framework

In automatic evaluation, we mainly focus on evaluating the agent's ability to generate accurate API requests and proper answers according to the API calling results. Specifically, we use the action exact match score (Action EM), which measures whether the agent uses the correct API as the reference gold API, and the ROUGE-L score, which measures the similarity between the generated response and the gold answer. Additionally, we introduce a novel metric called Argument F1 for fully evaluating the quality of API requests. To compute Argument F1, we categorize the arguments in the agent's API request into two cases, namely half match (HM) and full match (FM), representing a correct argument with a wrong value and a correct argument with a correct value, respectively. Suppose the number of gold arguments in the API is |A|, and the number of arguments in the agent's API request is |A*|; we compute the new recall and precision as follows:

R = (0.5 × #HM + #FM) / |A|,   (1)
P = (0.5 × #HM + #FM) / |A*|,  (2)

and the final Argument F1 is computed as:

F1 = 2 × R × P / (R + P).   (3)

A sample code for the automated evaluation of agents is provided below:

```python
from tool_agent_finetune import evaluation
EM, F1, ROUGE = evaluation(refs, preds)
```
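The Argument F1 computation in Equations (1)-(3) can be sketched as follows. The helper and the dict-based request representation are our own illustration, not the released evaluation code.

```python
def argument_f1(gold_args, pred_args):
    # HM: correct argument name with a wrong value; FM: correct name and value.
    hm = sum(1 for k, v in pred_args.items() if k in gold_args and gold_args[k] != v)
    fm = sum(1 for k, v in pred_args.items() if k in gold_args and gold_args[k] == v)
    recall = (0.5 * hm + fm) / len(gold_args) if gold_args else 0.0
    precision = (0.5 * hm + fm) / len(pred_args) if pred_args else 0.0
    if recall + precision == 0:
        return 0.0
    return 2 * recall * precision / (recall + precision)
```

For example, a request with one fully matched and one half-matched argument against two gold arguments yields R = P = (0.5 + 1)/2 = 0.75 and hence F1 = 0.75.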
Expert annotators were engaged to annotate the evaluation instances, with the task of providing diverse instructions, manually documenting correct API calling requests, and writing appropriate responses. The statistics of our currently assembled test data are in Appendix B.1, and the automatic evaluation scores of our trained agents can be found in Appendix B.2. We also allow users to upload their own annotated test examples to accurately evaluate the performance of agents in customized scenarios.

Figure 3: Demo cases of ModelScopeGPT based on ModelScope-Agent: (a) the ModelScope intelligent assistant; (b) registering and using new tools on Alibaba Cloud (e.g., renewing ECS instances by instance ID for a given number of months).
# 4.2 Human Evaluation with Agent Arena

Inspired by the Arena for chatbots (Zheng et al., 2023), we have built an accessible Agent Arena8 that allows users to furnish instructions to two anonymous agents, based on the provided APIs. Subsequently, users have the opportunity to vote on which agent performs better in tackling the instruction with the given APIs. In accordance with the framework presented by Zheng et al. (2023), we adopt a system of Elo ratings and leaderboard maintenance for the participating agents.

# 5 Usage Example of ModelScopeGPT

In this section, we showcase a successful application of the ModelScope Community, ModelScopeGPT9, based on our ModelScope-Agent.

ModelScope Intelligent Assistant. Based on ModelScope-Agent, we have developed an intelligent assistant for the ModelScope Community, namely ModelScopeGPT. It uses LLMs as a controller to connect dozens of domain-specific AI models in the ModelScope open-source community, covering NLP, CV, audio, and multi-modal fields. To make the pipeline more practical, we have included an API retrieval tool and a knowledge retrieval tool to automatically select proper APIs and access the local ModelScope knowledge. As shown in Figure 3a, ModelScopeGPT can support API calls in multi-turn conversations and generate correct API call parameters using information from previous conversations. More cases can be found in Appendix C. As a result, ModelScopeGPT has received over 170k requests from 40k user visits within one month of its release.

8 https://modelscope.cn/studios/LLMZOO/Chinese-Arena/summary
9 https://modelscope.cn/studios/damo/ModelScopeGPT/summary
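A single Elo update after one arena vote can be sketched as below. This uses the standard Elo formula; the arena's exact K-factor and tie handling are our assumptions.

```python
def elo_update(r_a, r_b, winner, k=32):
    # Expected score of agent A against agent B under the Elo model.
    e_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))
    s_a = {"a": 1.0, "b": 0.0, "tie": 0.5}[winner]   # observed score of A
    return r_a + k * (s_a - e_a), r_b + k * ((1.0 - s_a) - (1.0 - e_a))

# Two agents start at 1000; a vote for agent A moves the ratings apart.
new_a, new_b = elo_update(1000.0, 1000.0, "a")
```

Because updates are zero-sum and depend only on the current ratings and one vote, a leaderboard can be maintained incrementally as votes arrive.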
Register and Use New Tools. Another key feature of an agent is its generalization capability to unseen APIs, which allows users to quickly register their own APIs and customize their specific applications. Therefore, we test the generalization ability of ModelScopeGPT by applying it to an Alibaba Cloud application scenario. As shown in Figure 3b, we first found an API for renewing an ECS instance on Alibaba Cloud. Then, we registered the API schema defined in the tool library to the agent. Finally, we entered the prompt "Please help me re- new an ECS..." in the demo. The agent generated a request through planning, selected the appropriate API, called the API to renew the instance successfully, and provided a reply to inform the user that the renewal was completed. This test demonstrates that the open-source LLM optimized on the released API dataset has a strong generalization ability towards unseen APIs.
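Registering a new API of this kind might look as follows. The schema layout and the validation helper are hypothetical, for illustration only; the actual ModelScope-Agent tool-registration format differs.

```python
# Hypothetical schema for the ECS renewal API (field names are illustrative,
# not the real Alibaba Cloud or ModelScope-Agent schema).
ecs_renew_schema = {
    "name": "renew_ecs_instance",
    "description": "Renew an Alibaba Cloud ECS instance for N months",
    "parameters": [
        {"name": "instance_id", "type": "string", "required": True},
        {"name": "period_months", "type": "integer", "required": True},
    ],
}

def validate_request(schema, params):
    # Check that a generated API request supplies every required argument.
    required = {p["name"] for p in schema["parameters"] if p["required"]}
    return required.issubset(params)

ok = validate_request(ecs_renew_schema,
                      {"instance_id": "i-example", "period_months": 10})
```

Validating the planned request against the registered schema before execution is one simple way to catch hallucinated or missing arguments for an unseen API.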
ModelScope-Agent aims to facilitate building AI Agent applications and research based on open-source LLMs by providing a general and customizable agent framework covering flexible system design, data collection, model training, evaluation and usage examples in real-world applications. It provides an open-source, community-driven library towards AI Agent learning and best practices for building an agent system with open-source LLMs. We hope ModelScope-Agent can help pave the way towards a new era of AI Agent.

# Ethics Statement

Intended Use. ModelScope-Agent is designed to facilitate building AI Agent applications and research based on open-source LLMs, by providing a general and customizable agent system.

Potential Misuse. Although we have only trained with the tool-use datasets and gone through certain data filtering rules, it is still possible that the customized model may generate some biased, fake, and unsafe information. Our agent framework also provides users with the freedom to select proper LLMs and upload their own clean data for training. It is also important to design specific methods to improve the safety of the agent framework in the future.

# References
2309.00986#25
2309.00986
26
Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan, and Andy Zeng. 2022. Do as I can, not as I say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691.
2309.00986#26
2309.00986
27
Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Cojocaru, Merouane Debbah, Etienne Goffinet, Daniel Heslow, Julien Launay, Quentin Malartic, et al. 2023. Falcon-40B: An open large language model with state-of-the-art performance.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
2309.00986#27
2309.00986
30
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. 2022. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556.

Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, Pierre Sermanet, Tomas Jackson, Noah Brown, Linda Luu, Sergey Levine, Karol Hausman, and Brian Ichter. 2023. Inner monologue: Embodied reasoning through planning with language models. In Proceedings of The 6th Conference on Robot Learning, volume 205 of Proceedings of Machine Learning Research, pages 1769–1782. PMLR.

Huggingface. 2023. Transformers agent. Website. https://huggingface.co/docs/transformers/transformers_agents.

Minghao Li, Feifan Song, Bowen Yu, Haiyang Yu, Zhoujun Li, Fei Huang, and Yongbin Li. 2023. API-Bank: A benchmark for tool-augmented LLMs. arXiv preprint arXiv:2304.08244.
2309.00986#30
2309.00986
31
Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng-Xin Yong, Hailey Schoelkopf, et al. 2022. Crosslingual generalization through multitask finetuning. arXiv preprint arXiv:2211.01786.

OpenAI. 2022. ChatGPT: Optimizing language models for dialogue.

OpenAI. 2023. GPT-4 technical report. CoRR, abs/2303.08774.

Shishir G. Patil, Tianjun Zhang, Xin Wang, and Joseph E. Gonzalez. 2023. Gorilla: Large language model connected with massive APIs. arXiv preprint arXiv:2305.15334.
2309.00986#31
2309.00986
32
Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, Yi Ren Fung, Yusheng Su, Huadong Wang, Cheng Qian, Runchu Tian, Kunlun Zhu, Shihao Liang, Xingyu Shen, Bokai Xu, Zhen Zhang, Yining Ye, Bowen Li, Ziwei Tang, Jing Yi, Yuzhang Zhu, Zhenning Dai, Lan Yan, Xin Cong, Yaxi Lu, Weilin Zhao, Yuxiang Huang, Junxi Yan, Xu Han, Xian Sun, Dahai Li, Jason Phang, Cheng Yang, Tongshuang Wu, Heng Ji, Zhiyuan Liu, and Maosong Sun. 2023. Tool learning with foundation models. arXiv preprint arXiv:2304.08354.

QwenLM. 2023. Qwen-7B.
2309.00986#32
2309.00986
33
Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. 2021. Scaling language models: Methods, analysis & insights from training Gopher. arXiv preprint arXiv:2112.11446.

Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. 2023. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761.

Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023. HuggingGPT: Solving AI tasks with ChatGPT and its friends in Hugging Face. arXiv preprint arXiv:2303.17580.

Qiaoyu Tang, Ziliang Deng, Hongyu Lin, Xianpei Han, Qiao Liang, and Le Sun. 2023. ToolAlpaca: Generalized tool learning for language models with 3000 simulated cases. arXiv preprint arXiv:2306.05301.
2309.00986#33
2309.00986
34
THUDM. 2023. ChatGLM. https://github.com/THUDM/ChatGLM-6B.

Junfeng Tian, Hehong Chen, Guohai Xu, Ming Yan, Xing Gao, Jianhai Zhang, Chenliang Li, Jiayi Liu, Wenshen Xu, Haiyang Xu, Qi Qian, Wei Wang, Qinghao Ye, Jiejing Zhang, Ji Zhang, Fei Huang, and Jingren Zhou. 2023. ChatPLUG: Open-domain generative dialogue system with internet-augmented instruction tuning for digital human. arXiv preprint arXiv:2304.07849.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
2309.00986#34
2309.00986
35
Chenfei Wu, Shengming Yin, Weizhen Qi, Xiaodong Wang, Zecheng Tang, and Nan Duan. 2023. Visual ChatGPT: Talking, drawing and editing with visual foundation models. arXiv preprint arXiv:2303.04671.

Rui Yang, Lin Song, Yanwei Li, Sijie Zhao, Yixiao Ge, Xiu Li, and Ying Shan. 2023. GPT4Tools: Teaching large language model to use tools via self-instruction. arXiv preprint arXiv:2305.18752.

Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. 2023. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. arXiv preprint arXiv:2306.05685.

# A Library

# A.1 Tool List
2309.00986#35
2309.00986
36
| API Name (language) | Description | Type |
| --- | --- | --- |
| Text-to-Image(en) | Converts text to an image. | Model API |
| Text-to-Image(zh) | Converts text to an image. | Model API |
| Text-to-Video(en) | Converts text to a video. | Model API |
| Text-to-Audio(en) | Converts text to audio. | Model API |
| Text-to-Audio(zh) | Converts text to audio. | Model API |
| Image-Chat(en) | Image chat. | Model API |
| Translation-zh2en | Translates Chinese text to English. | Model API |
| Translation-en2zh | Translates English text to Chinese. | Model API |
| Universal-IE(zh) | Extracts structured information. | Model API |
| Text-to-Geographic(zh) | Extracts geographic information. | Model API |
| NER(zh) | Recognizes named entities in text. | Model API |
| API-Retrieval | Retrieves relevant APIs. | Common API |
| ModelScope-Retrieval | Retrieves ModelScope docs. | Common API |

Table 2: Statistics of the default tool list. Supported input languages for the APIs are listed in parentheses.

# A.2 CustomTool
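The API-Retrieval entry in Table 2 can be illustrated with a toy ranking over the tool descriptions. A real retrieval module would use dense embeddings; the keyword-overlap scorer below is only a sketch, and the `retrieve_tool` name is hypothetical:

```python
# Toy tool retrieval: rank registered APIs by word overlap between the user
# query and each API description from Table 2 (subset shown). Illustrative
# only; not the actual modelscope-agent retrieval implementation.
TOOLS = {
    "Text-to-Image(en)": "Converts text to an image.",
    "Text-to-Video(en)": "Converts text to a video.",
    "Translation-zh2en": "Translates Chinese text to English.",
    "NER(zh)": "Recognizes named entities in text.",
}


def retrieve_tool(query: str) -> str:
    """Return the tool whose description shares the most words with the query."""
    query_words = set(query.lower().split())

    def overlap(name: str) -> int:
        desc_words = set(TOOLS[name].lower().rstrip(".").split())
        return len(query_words & desc_words)

    return max(TOOLS, key=overlap)


print(retrieve_tool("please draw an image of a cat"))
```

Replacing the overlap scorer with cosine similarity over sentence embeddings gives the dense-retrieval variant without changing the interface.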
2309.00986#36
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
Large language models (LLMs) have recently demonstrated remarkable capabilities to comprehend human intentions, engage in reasoning, and design planning-like behavior. To further unleash the power of LLMs to accomplish complex tasks, there is a growing trend to build agent framework that equips LLMs, such as ChatGPT, with tool-use abilities to connect with massive external APIs. In this work, we introduce ModelScope-Agent, a general and customizable agent framework for real-world applications, based on open-source LLMs as controllers. It provides a user-friendly system library, with customizable engine design to support model training on multiple open-source LLMs, while also enabling seamless integration with both model APIs and common APIs in a unified way. To equip the LLMs with tool-use abilities, a comprehensive framework has been proposed spanning over tool-use data collection, tool retrieval, tool registration, memory control, customized model training, and evaluation for practical real-world applications. Finally, we showcase ModelScopeGPT, a real-world intelligent assistant of ModelScope Community based on the ModelScope-Agent framework, which is able to connect open-source LLMs with more than 1000 public AI models and localized community knowledge in ModelScope. The ModelScope-Agent library\footnote{https://github.com/modelscope/modelscope-agent} and online demo\footnote{https://modelscope.cn/studios/damo/ModelScopeGPT/summary} are now publicly available.
http://arxiv.org/pdf/2309.00986
Chenliang Li, Hehong Chen, Ming Yan, Weizhou Shen, Haiyang Xu, Zhikai Wu, Zhicheng Zhang, Wenmeng Zhou, Yingda Chen, Chen Cheng, Hongzhu Shi, Ji Zhang, Fei Huang, Jingren Zhou
cs.CL
null
null
cs.CL
20230902
20230902
[ { "id": "2304.07849" }, { "id": "2302.13971" }, { "id": "2306.05685" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2302.04761" }, { "id": "2304.08244" }, { "id": "2211.01786" }, { "id": "2204.01691" }, { "id": "2304.08354" }, { "id": "2203.15556" }, { "id": "2303.04671" }, { "id": "2305.18752" }, { "id": "2306.05301" }, { "id": "2112.11446" } ]
2309.00986
37
Users can customize their own tools by inheriting the base tool and defining the tool name, description, and parameters according to a pre-defined schema. Moreover, one can implement _local_call() or _remote_call() depending on specific requirements. To illustrate, below is the skeleton of a custom tool:

```python
class CustomTool(Tool):
    description = 'xxx'
    name = 'xxx'
    parameters: list = [{
        'name': 'xxx',
        'description': 'xxx',
        'required': True
    }]

    def _local_call(self):
        ...

    def _remote_call(self):
        ...
```

# B Experiment Setup

# B.1 Evaluation Benchmark

To assess the generalization of the trained agent, we include 10 in-domain APIs that appear in the training set of ModelScope-Agent and 10 real unseen APIs10. We also account for the multi-turn ability of the agent by annotating several multi-turn scenarios in our evaluation benchmark. Our test instances were annotated by asking human experts to write diverse instructions first. Then the human experts were asked to write the JSON API request and answer the instructions properly after obtaining the API calling results. Our final testing
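A runnable variant of the skeleton above, using a stub in place of the library's Tool base class (the real base class ships with modelscope-agent; the `WordCountTool` example and its fields are hypothetical):

```python
# Stub base class standing in for modelscope-agent's `Tool`; it only
# illustrates the schema contract (name, description, parameters) and the
# local/remote dispatch described in A.2.
class Tool:
    description = ""
    name = ""
    parameters: list = []

    def __call__(self, remote: bool = False, **kwargs):
        # Dispatch to a remote service or a local implementation.
        return self._remote_call(**kwargs) if remote else self._local_call(**kwargs)


class WordCountTool(Tool):
    description = "Counts the words in a piece of text."
    name = "word_count"
    parameters: list = [{
        "name": "text",
        "description": "Input text to count.",
        "required": True,
    }]

    def _local_call(self, text: str) -> dict:
        return {"result": len(text.split())}

    def _remote_call(self, text: str) -> dict:
        # A real tool would call its service endpoint here.
        raise NotImplementedError


tool = WordCountTool()
print(tool(text="modelscope agent custom tool"))  # -> {'result': 4}
```

The schema fields (name, description, parameters) are what the agent's planner sees when deciding whether to call the tool; the `_local_call`/`_remote_call` split lets the same schema back either an in-process function or a hosted API.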
ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models
10In progress, we will include more APIs in the future.

dataset dataset consisted of 360 conversations with 2059 text snippets as the references to be compared with the agent predictions, which comprise 798 API requests and 1261 plain-text answers according to the previous calling results.

# B.2 Evaluation Results

    Model              ROUGE-L   Action EM   Argument F1
    ChatGPT (2-shot)∗    36.70       34.82         25.51
    LLaMA                39.16       58.60         44.98
    ChatPLUG11           46.45       68.29         55.12
    MSAgent-Qwen12       51.35       87.23         68.09

Table 3: Automatic evaluation results. ∗ represents that we do not fine-tune ChatGPT but use in-context learning with 2 demonstrations.
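The paper reports Action EM and Argument F1 without spelling out how they are computed. Below is a minimal sketch of one plausible scoring scheme (our assumption — the function names and the exact matching granularity are not from the paper): Action EM as exact match of the predicted API name, and Argument F1 over (name, value) argument pairs of a single API request.

```python
def action_em(pred_actions, gold_actions):
    """Fraction of API requests whose predicted API name exactly
    matches the gold one."""
    matches = sum(p == g for p, g in zip(pred_actions, gold_actions))
    return matches / len(gold_actions)


def argument_f1(pred_args, gold_args):
    """F1 over (name, value) argument pairs of one API request."""
    pred, gold = set(pred_args.items()), set(gold_args.items())
    tp = len(pred & gold)  # arguments with both name and value correct
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)
```

For example, a prediction that gets the API name right on one of two requests scores Action EM 0.5, and a request with one of two arguments correct scores Argument F1 0.5.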
We compare the models trained in our proposed ModelScopeGPT. The automatic evaluation results are shown in Table 3. Based on the findings obtained from our experimentation, it is evident that ChatGPT with in-context learning yielded inferior results compared to the other models, which were finetuned. Furthermore, LLaMA underperformed the other finetuned models. Our error study revealed that the lower performance of ChatGPT and LLaMA could be attributed to the large proportion of Chinese test cases in our test set; the models that performed better (ChatPLUG, Qwen) were those trained predominantly on Chinese data. Our investigation also revealed that ChatGPT and LLaMA exhibited limitations in user intent recognition, which ultimately led to their suboptimal performance on Action EM. Among the models examined, Qwen displayed the most favorable performance, which could be attributed to the strength of its base model.
# B.3 Weighted LM

We give an example of the training strategy Weighted LM. As shown in Figure 4, tokens with different colors have different loss weights. For the user input prompt, we set the loss weight to 0, so that the model does not calculate a loss on the prompt. For the API-agnostic text of the assistant, we keep the loss weight at 1. Finally, for the important text of the API calling, such as the API name, parameters, and URL, we set the loss weight to 2, which improves the generation accuracy of API calling.

Figure 4: Example of training strategy for weighted LM. Different colored tokens have different loss weights.
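The scheme above amounts to a per-token weighted cross-entropy. A minimal sketch follows (our assumption — the authors' training code is not shown, and normalizing by the total weight rather than the token count is one common choice):

```python
import math

def weighted_lm_loss(token_log_probs, weights):
    """Per-token weighted negative log-likelihood.

    token_log_probs: log-probability the model assigns to each gold token.
    weights: loss weight per token -- 0 for prompt tokens, 1 for
        API-agnostic assistant text, 2 for API-calling text (API name,
        parameters, URL, ...), as described in B.3.
    """
    assert len(token_log_probs) == len(weights)
    total = sum(-lp * w for lp, w in zip(token_log_probs, weights))
    norm = sum(weights)  # normalize by total weight, not token count
    return total / norm if norm > 0 else 0.0
```

For example, three tokens each assigned probability 0.5, with weights 0, 1, and 2, give a loss of ln 2: the prompt token contributes nothing, and the API-calling token counts twice.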
Figure 5: Single-step tool-use instructions, text-to-video cases. We have captured a few frames of the video to display. Testing the model using the same semantic instruction ("Generate a video: two cats are playing") in both English (left) and Chinese (right).