# Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Datasets. MSCOCO (Lin et al., 2014) is a comprehensive dataset used for image recognition, segmentation, and captioning. It comprises over 300,000 images spanning more than 80 object categories, each with detailed annotations. Following (Li et al., 2023d; Liu et al., 2023a), we selected 5,000 unique images from the COCO 2014 training dataset to evaluate performance. To train the hallucination revisor, we randomly selected 5,000 image-text pairs from LLaVA-150k (Liu et al., 2023c), ensuring that these images were different from the ones used in testing.

Evaluation Metric. Caption Hallucination Assessment with Image Relevance (CHAIR) (Rohrbach et al., 2018) is a widely used metric for evaluating object hallucination in image captioning tasks. CHAIR assesses the quality of image captions by comparing them to the ground-truth objects present in the corresponding images: it calculates the proportion of objects mentioned in the caption that are not actually present in the image. There are two common variants, CHAIR_I and CHAIR_S, which measure the degree of object hallucination at the object-instance level and at the sentence level, respectively:

$$\mathrm{CHAIR}_I = \frac{|\{\text{hallucinated objects}\}|}{|\{\text{all mentioned objects}\}|}, \qquad \mathrm{CHAIR}_S = \frac{|\{\text{captions with hallucinated objects}\}|}{|\{\text{all captions}\}|}. \tag{4}$$
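As a concrete illustration of Eq. (4), the two variants can be computed from per-image object sets as in the minimal sketch below (our illustration, not the evaluation code released with the paper; the function and variable names are ours):

```python
from typing import List, Set

def chair_scores(mentioned: List[Set[str]], ground_truth: List[Set[str]]):
    """Compute CHAIR_I and CHAIR_S per Eq. (4).

    mentioned[i]    -- objects mentioned in the i-th generated caption
    ground_truth[i] -- annotated objects actually present in the i-th image
    """
    total_mentioned = 0       # |{all mentioned objects}|, summed over captions
    total_hallucinated = 0    # |{hallucinated objects}|
    captions_with_halluc = 0  # |{captions with hallucinated objects}|
    for men, gt in zip(mentioned, ground_truth):
        halluc = men - gt     # mentioned but not present in the image
        total_mentioned += len(men)
        total_hallucinated += len(halluc)
        captions_with_halluc += int(bool(halluc))
    chair_i = total_hallucinated / max(total_mentioned, 1)
    chair_s = captions_with_halluc / max(len(mentioned), 1)
    return chair_i, chair_s

# Toy example: two captions, one of which hallucinates a "fork".
ci, cs = chair_scores(
    mentioned=[{"sandwich", "plate", "fork"}, {"dog"}],
    ground_truth=[{"sandwich", "plate"}, {"dog"}],
)
assert round(ci, 2) == 0.25 and cs == 0.5
```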
Baselines. The comparison methods include: Original, which directly uses the generated descriptions from the LVLMs; Teacher (Saha et al., 2023), which leverages BLIP-2 (Li et al., 2023b) to generate short image descriptions and employs them as contextual guidance for generating long-form descriptions; Chain-of-Thought (CoT) (Wei et al., 2022), which has the model first list objects and then describe the image; Greedy-Decoding, which abstains from a sampling strategy and makes the model output its most certain tokens; GPT-Ensemble, which first employs GPT-3.5 to aggregate the commonly generated descriptions from multiple LVLMs, excluding the one under evaluation, and then uses these summarized common descriptions as guidance to rewrite the originally generated description of the evaluated model; and GPT-Teacher, where GPT-3.5 is tasked with rewriting the original long-form description based on the BLIP-2-generated short descriptions. Detailed descriptions of the baselines are in Appendix A.4.

Evaluated LVLMs. We performed experiments on six of the most recent LVLMs, with their corresponding language models specified in parentheses: MiniGPT-4 (Vicuna 13B) (Zhu et al., 2023), LLaVa (LLaMA 13B) (Liu et al., 2023d), MMGPT (LLaMA 7B) (Gong et al., 2023), LLaMA-Adapter (LLaMA 7B) (Zhang et al., 2023b), mPLUG-Owl (LLaMA 7B) (Ye et al., 2023), and InstructBLIP (Vicuna 7B) (Dai et al., 2023).

Hyperparameter Settings. Unless specified otherwise, all experiments in the paper use MiniGPT-4 as the backbone of the revisor, along with the training parameter settings provided in Appendix A.2. All hyperparameters are selected via cross-validation.

4.1 EVALUATION STRATEGIES AND RESULTS

Automated Object Hallucination Evaluation.
We follow the guidelines presented in (Rohrbach et al., 2018) to perform an automated calculation of the CHAIR metrics on the MSCOCO dataset, where 80 object categories are involved in the automated evaluation process. In addition, we extend our evaluation to other widely used metrics such as BLEU and CLIP score, which are commonly adopted for assessing the quality of image captioning. Detailed descriptions and results for these additional metrics can be found in Appendix C.1.

Human and GPT Evaluations. Although automated evaluation strategies are efficient, they cannot cover all objects present in the evaluated images. To overcome this limitation, we conducted a comprehensive human evaluation involving several native speakers; please refer to Appendix A.5 for the evaluation interface. In this human evaluation, participants are assigned the task of annotating hallucinatory objects, and we rank the different methods based on human feedback. In addition to human evaluation, inspired by (Zheng et al., 2023), we also prompt GPT-3.5 to compare different descriptions. In this GPT evaluation, we provide the annotated information, including detection boxes and captions, and expect GPT-3.5 to provide a ranking of the descriptions from the various methods. For the GPT evaluation, we use the prompts referenced in Table 9 in the Appendix.

Results. In Tables 1 and 2, we report the results of the automated evaluations and of the human and GPT evaluations under different LVLMs, respectively. Taking cost into account, we only compare LURE with the four strongest methods in the human and GPT evaluations. Although Teacher, CoT, and GPT-Teacher can improve performance over the original descriptions in most cases, LURE significantly outperforms these strong baselines and effectively reduces object hallucination in the generated descriptions. One potential reason is that all of these baselines suffer from error propagation to some extent: for instance, CoT's linear guidance can lead to errors if the object-listing step is incorrect. In contrast, LURE directly corrects hallucinatory descriptions using guidance from the potential factors that trigger hallucinations.

4.2 ANALYSIS OF LURE

Are the Performance Gains of LURE from Using Constructed Hallucination Datasets? To verify that the performance gains of our method do not come from using additional data to train the revisor, we fine-tuned the original LVLMs with the additional dataset.
The results on MiniGPT-4 are shown in Table 3, where 'Original' represents the descriptions of MiniGPT-4.

Table 3: Comparison of LURE to fine-tuning with the training data of the revisor.

| Model | CHAIR_S ↓ | CHAIR_I ↓ |
|---|---|---|
| Original | 26.8 | 7.3 |
| FT (add'l data) | 31.0 | 7.2 |
| LURE (Ours) | 19.7 | 4.9 |

Table 1: Automated hallucination evaluation under six LVLMs using CHAIR_S (C_S) and CHAIR_I (C_I), where smaller values indicate less object hallucination. For additional metrics, please refer to Appendix C.1.
| Method | MiniGPT-4 C_S↓ | C_I↓ | LLaVa C_S↓ | C_I↓ | MMGPT C_S↓ | C_I↓ | LLaMA-Adapter C_S↓ | C_I↓ | mPLUG-Owl C_S↓ | C_I↓ | InstructBLIP C_S↓ | C_I↓ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Original | 26.8 | 7.3 | 54.0 | 11.3 | 56.6 | 11.0 | 58.8 | 13.7 | 71.2 | 16.5 | 40.0 | 8.2 |
| Teacher | 24.0 | 5.7 | 49.9 | 9.3 | 53.4 | 7.5 | 40.8 | 9.4 | 62.4 | 13.0 | 36.4 | 7.5 |
| CoT | 31.6 | 9.4 | 47.6 | 9.0 | 48.8 | 17.5 | 43.3 | 9.4 | 56.9 | 13.4 | 35.7 | 7.8 |
| Greedy-Decoding | 25.1 | 6.6 | 50.9 | 10.0 | 50.6 | 8.4 | 55.9 | 13.7 | 55.1 | 12.8 | 35.5 | 7.8 |
| GPT-Ensemble | 41.0 | 10.6 | 43.0 | 10.7 | 51.0 | 11.1 | 47.1 | 13.0 | 52.0 | 15.2 | 51.0 | 13.0 |
| GPT-Teacher | 25.3 | 7.6 | 38.0 | 7.8 | 26.7 | 9.3 | 49.0 | 12.4 | 22.0 | 9.0 | 32.0 | 7.8 |
| LURE (ours) | 19.7 | 4.9 | 27.1 | 6.4 | 22.2 | 5.6 | 35.3 | 9.1 | 18.8 | 5.4 | 21.0 | 5.1 |

Table 2: Description-ranking evaluations, comparing the four strongest baselines in both human ('H') and GPT ('G') evaluations. Metrics represent the average rankings within the top 1-5 positions, with lower rankings indicating less hallucination.
| Method | MiniGPT-4 H↓ | G↓ | LLaVa H↓ | G↓ | MMGPT H↓ | G↓ | LLaMA-Adapter H↓ | G↓ | mPLUG-Owl H↓ | G↓ | InstructBLIP H↓ | G↓ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Original | 3.97 | 3.10 | 4.55 | 4.79 | 3.20 | 4.38 | 2.96 | 4.45 | 4.25 | 3.98 | 4.29 | 4.77 |
| Teacher | 3.36 | 3.83 | 4.62 | 3.30 | 3.00 | 4.07 | 2.16 | 3.13 | 3.25 | 3.66 | 3.34 | 3.53 |
| CoT | 2.44 | 2.83 | 3.66 | 3.07 | 3.05 | 2.63 | 2.90 | 2.10 | 3.75 | 3.13 | 2.78 | 2.21 |
| GPT-Teacher | 3.56 | 3.28 | 3.25 | 3.09 | 2.52 | 2.45 | 2.68 | 3.24 | 2.50 | 2.44 | 3.12 | 2.56 |
| LURE (ours) | 1.67 | 1.96 | 1.65 | 1.83 | 1.61 | 1.58 | 1.90 | 2.08 | 1.25 | 1.79 | 1.47 | 1.93 |

According to Table 3, LURE outperforms the fine-tuned LVLMs, which indicates that our method indeed reduces object hallucination by post-hoc rectifying potentially hallucinatory descriptions rather than by using additional data.

Ablation Study: Do the Hallucination Factors Contribute Performance Gains? To demonstrate the impact of considering co-occurrence, uncertainty, and object position in reducing hallucination, we conducted ablation experiments and report the results in Table 4, where 'Original' represents the descriptions of MiniGPT-4.
In the ablation experiments, we trained and deployed the revisor without each of the three factors, one at a time. The results show that all three factors contribute to training a strong hallucination revisor that reduces object hallucination. Furthermore, we analyzed the changes in these three factors before and after applying the revisor, as presented in Appendix C.2. This analysis demonstrates that LURE can effectively reduce instances of hallucination caused by these factors.
Table 4: Ablation study of the three hallucination factors on MiniGPT-4.

| Model | CHAIR_S ↓ | CHAIR_I ↓ |
|---|---|---|
| Original | 26.8 | 7.3 |
| w/o Co-occurrence | 22.6 | 4.9 |
| w/o Uncertainty | 21.2 | 5.4 |
| w/o Position | 22.3 | 5.8 |
| LURE (Ours) | 19.7 | 4.9 |

Robustness Analysis of the Hallucination Revisor. We further analyze the robustness of the revisor with respect to different backbones. Specifically, we trained the revisor on the same dataset using different backbones:
MiniGPT-4, LLaMA-Adapter, and mPLUG-Owl. The results are reported in Table 5, where 'Original' represents the descriptions of MiniGPT-4. We observe that despite the varying performance of each backbone, LURE consistently improves performance compared to the original description, which further indicates the effectiveness of LURE. Additionally, we analyze the results of LURE with respect to various uncertainty thresholds in Appendix C.3. The findings demonstrate that LURE exhibits strong performance across a wide range of uncertainty thresholds.
Table 5: Robustness analysis of the revisor with different backbones.

| Backbone | CHAIR_S ↓ | CHAIR_I ↓ |
|---|---|---|
| Original | 26.8 | 7.3 |
| MiniGPT-4 | 19.7 | 4.9 |
| LLaMA-Adapter | 21.3 | 5.2 |
| mPLUG-Owl | 22.1 | 5.4 |

Case Analysis. We selected several strong baselines and present a case with rectified descriptions in Figure 3. Compared with other approaches, LURE excels in providing a more accurate image description.
Figure 3: A case study comparing the levels of hallucination among various baselines.

In the case, LURE accurately depicts the primary elements (e.g., sandwich, chair, plate) while avoiding hallucinatory objects like the fork and handbag. Although the other baselines partially reduce hallucination, they still exhibit object hallucinations in their descriptions. Additionally, LURE also mitigates logical errors to some extent, including errors in object orientation and actions. Further case analyses can be found in Appendices D.3 and D.4.

# 5 RELATED WORK

Vision-Language Models. Vision-language pre-trained models, as exemplified by (Li et al., 2021; Zeng et al., 2021), demonstrate substantial capabilities in modeling interactions between visual and textual information, especially when fine-tuned for specific tasks. Recently, autoregressive large-scale language models (LLMs) (Brown et al., 2020; Chowdhery et al., 2022; Touvron et al., 2023; Zhang et al., 2022b; Chiang et al., 2023; Taori et al., 2023) have ushered in a new era of vision-language models. These models, known as LVLMs, integrate LLMs with the visual modality and showcase impressive visual understanding through end-to-end training techniques that directly decode visual and text tokens in a unified manner (Liu et al., 2023d; Zhu et al., 2023; Ye et al., 2023; Li et al., 2023a). However, similar to VLMs, LVLMs also face the challenge of object hallucination (Wang et al., 2023a; Rohrbach et al., 2018). This form of object hallucination is more pronounced and widespread in the long-form descriptions produced by LVLMs than in the shorter descriptions generated by VLMs (Zhang et al., 2023a).

Hallucination in VLMs and LVLMs. In VLMs, hallucination typically refers to scenarios where the generated descriptions contain information that does not exist in the visual modality (Rohrbach et al., 2018; Biten et al., 2022; Wang et al., 2023a).
Addressing object hallucination in VLMs is primarily achieved through techniques such as fine-grained contrastive learning (Zeng et al., 2021), ROI feature fusion (Biten et al., 2022), and eliminating co-occurrence patterns through data augmentation (Kim et al., 2023). However, the training paradigms of traditional VLMs and recent LVLMs differ, and the new autoregressive training paradigm in LVLMs makes it challenging to directly apply the hallucination-mitigation methods used in VLMs. Recent research has begun to address the issue of object hallucination in LVLMs, including hallucination evaluation and detection (Wang et al., 2023a; Liu et al., 2023a; Li et al., 2023d), as well as the construction of higher-quality datasets for fine-tuning (Gunjal et al., 2023; Li et al., 2023c; Liu et al., 2023a;d). Nevertheless, acquiring a substantial number of high-quality examples can be time-consuming and labor-intensive. Instead, grounded in a statistical analysis of hallucination, we propose a conceptually different approach, LURE, to post-hoc rectify object hallucination, and we demonstrate its effectiveness in reducing hallucination and its compatibility with various LVLMs.

# 6 CONCLUSION

In this paper, our objective is to address the challenge of object hallucination in LVLMs. We introduce a lightweight post-hoc method, named LVLM Hallucination Revisor (LURE), designed to rectify object hallucination in the descriptions generated by LVLMs. LURE is grounded in three key factors known to contribute to object hallucination: co-occurrence, uncertainty, and object position. These factors have been shown to induce hallucination both empirically and theoretically. Our experiments, conducted on six open-source LVLMs, demonstrate the effectiveness of LURE in mitigating object hallucination in LVLM-generated descriptions.
# ACKNOWLEDGEMENT

This work was partially supported by Juniper Networks.

# REFERENCES

Ali Furkan Biten, Lluís Gómez, and Dimosthenis Karatzas. Let there be a clock on the beach: Reducing object hallucination in image captioning. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1381–1390, 2022.

Paul Brie, Nicolas Burny, Arthur Sluÿters, and Jean Vanderdonckt. Evaluating a large language model on searching for GUI layouts. Proceedings of the ACM on Human-Computer Interaction, 7(EICS):1–37, 2023.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020.

Yair Carmon, Aditi Raghunathan, Ludwig Schmidt, John C Duchi, and Percy S Liang. Unlabeled data improves adversarial robustness. Advances in Neural Information Processing Systems, 32, 2019.

Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. Conceptual 12M: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3558–3568, 2021.

Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality, March 2023. URL https://lmsys.org/blog/2023-03-30-vicuna/.

Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.

Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. InstructBLIP: Towards general-purpose vision-language models with instruction tuning, 2023.

Jie Ding, Vahid Tarokh, and Yuhong Yang. Bridging AIC and BIC: a new criterion for autoregression. IEEE Transactions on Information Theory, 64(6):4024–4043, 2017.

Markus Freitag and Yaser Al-Onaizan. Beam search strategies for neural machine translation. arXiv preprint arXiv:1702.01806, 2017.

Tao Gong, Chengqi Lyu, Shilong Zhang, Yudong Wang, Miao Zheng, Qian Zhao, Kuikun Liu, Wenwei Zhang, Ping Luo, and Kai Chen. MultiModal-GPT: A vision and language model for dialogue with humans, 2023.

Anisha Gunjal, Jihan Yin, and Erhan Bas. Detecting and preventing hallucinations in large vision language models. arXiv preprint arXiv:2308.06394, 2023.

Edward James Hannan, AJ McDougall, and Don Stephen Poskitt. Recursive estimation of autoregressions. Journal of the Royal Statistical Society: Series B (Methodological), 51(2):217–233, 1989.

Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751, 2019.

Mingzhe Hu, Shaoyan Pan, Yuheng Li, and Xiaofeng Yang. Advancing medical imaging with language models: A journey from n-grams to ChatGPT. arXiv preprint arXiv:2304.04920, 2023.

Ching-Kang Ing. Accumulated prediction errors, information criteria and optimal forecasting for autoregressive time series. The Annals of Statistics, 35(3):1238–1277, 2007.

Jae Myung Kim, A Koepke, Cordelia Schmid, and Zeynep Akata. Exposing and mitigating spurious correlations for cross-modal retrieval. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2584–2594, 2023.

Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu. Otter: A multi-modal model with in-context instruction tuning. arXiv preprint arXiv:2305.03726, 2023a.

Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu Hong Hoi. Align before fuse: Vision and language representation learning with momentum distillation. Advances in Neural Information Processing Systems, 34:9694–9705, 2021.

Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597, 2023b.

Lei Li, Yuwei Yin, Shicheng Li, Liang Chen, Peiyi Wang, Shuhuai Ren, Mukai Li, Yazheng Yang, Jingjing Xu, Xu Sun, et al. M3IT: A large-scale dataset towards multi-modal multilingual instruction tuning. arXiv preprint arXiv:2306.04387, 2023c.

Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. Evaluating object hallucination in large vision-language models. arXiv preprint arXiv:2305.10355, 2023d.

Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pp. 74–81, 2004.

Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pp. 740–755. Springer, 2014.

Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, and Lijuan Wang. Aligning large multi-modal model with robust instruction tuning. arXiv preprint arXiv:2306.14565, 2023a.

Haokun Liu, Yaonan Zhu, Kenji Kato, Izumi Kondo, Tadayoshi Aoyama, and Yasuhisa Hasegawa. LLM-based human-robot collaboration framework for manipulation tasks. arXiv preprint arXiv:2308.14972, 2023b.

Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning, 2023c.

Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv preprint arXiv:2304.08485, 2023d.

Muhammad Maaz, Hanoona Rasheed, Salman Khan, and Fahad Shahbaz Khan. Video-ChatGPT: Towards detailed video understanding via large vision and language models. arXiv preprint arXiv:2306.05424, 2023.

Jinjie Mai, Jun Chen, Bing Li, Guocheng Qian, Mohamed Elhoseiny, and Bernard Ghanem. LLM as a robotic brain: Unifying egocentric memory and control. arXiv preprint arXiv:2304.09349, 2023.

Gary M Olson, James D Herbsleb, and Henry H Reuter. Characterizing the sequential structure of interactive behaviors through statistical and grammatical techniques. Human–Computer Interaction, 9(3-4):427–472, 1994.

Vicente Ordonez, Girish Kulkarni, and Tamara Berg. Im2Text: Describing images using 1 million captioned photographs. Advances in Neural Information Processing Systems, 24, 2011.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pp. 311–318, 2002.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision, 2021.

Anna Rohrbach, Lisa Anne Hendricks, Kaylee Burns, Trevor Darrell, and Kate Saenko. Object hallucination in image captioning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 4035–4045, 2018.

Swarnadeep Saha, Peter Hase, and Mohit Bansal. Can language models teach weaker agents? Teacher explanations improve students via theory of mind. arXiv preprint arXiv:2306.09299, 2023.

Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford Alpaca: An instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca, 2023.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.

Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pp. 1096–1103, 2008.

Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, et al. Evaluation and analysis of hallucination in large vision-language models. arXiv preprint arXiv:2308.15126, 2023a.

Sheng Wang, Zihao Zhao, Xi Ouyang, Qian Wang, and Dinggang Shen. ChatCAD: Interactive computer-aided diagnosis on medical image using large language models. arXiv preprint arXiv:2302.07257, 2023b.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.

Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. mPLUG-Owl: Modularization empowers large language models with multimodality. arXiv preprint arXiv:2304.14178, 2023.

Yan Zeng, Xinsong Zhang, and Hang Li. Multi-grained vision language pre-training: Aligning texts with visual concepts. arXiv preprint arXiv:2111.08276, 2021.

Linjun Zhang, Zhun Deng, Kenji Kawaguchi, and James Zou. When and how mixup improves calibration. In International Conference on Machine Learning, pp. 26135–26160. PMLR, 2022a.

Muru Zhang, Ofir Press, William Merrill, Alisa Liu, and Noah A Smith. How language model hallucinations can snowball. arXiv preprint arXiv:2305.13534, 2023a.

Renrui Zhang, Jiaming Han, Aojun Zhou, Xiangfei Hu, Shilin Yan, Pan Lu, Hongsheng Li, Peng Gao, and Yu Qiao. LLaMA-Adapter: Efficient fine-tuning of language models with zero-init attention. arXiv preprint arXiv:2303.16199, 2023b.

Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. OPT: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022b.

Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. BERTScore: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675, 2019.

Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. arXiv preprint arXiv:2306.05685, 2023.

Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. MiniGPT-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023.
# A EXPERIMENTAL DETAILS

A.1 EXPERIMENTAL SETTING FOR THE HALLUCINATION ANALYSIS

Experimental Setting for Co-occurrence Analysis. The objects in this experiment are based on the 80 object labels annotated in (Rohrbach et al., 2018) from the COCO dataset, and the image descriptions are generated by MiniGPT-4 from inference results on 5,000 images in the COCO 2014 train dataset.

Experimental Setting for the Uncertainty Analysis. Because the uncertainty and position analyses are relatively independent of co-occurrence, and in order to avoid conducting statistical analysis on the training-set distribution, the statistical data for the uncertainty analysis is derived from MiniGPT-4's descriptions of 200 images from the COCO 2014 test dataset. The uncertainty of a token $z_i$ is computed as $-\log p(z_i \mid s_{<i}, x)$.

Experimental Setting for the Analysis of the Position of Hallucinated Objects. Similar to the uncertainty analysis, we used manually annotated descriptions generated by MiniGPT-4 for 200 images from the COCO 2014 test dataset, due to the need for precise positioning.

A.2 TRAINING SETTINGS FOR REVISOR

The overall revisor training setting is similar to that of MiniGPT-4. Only one A100 80G GPU is needed for training, which takes approximately 10 minutes. The hyperparameter settings of LURE during the training phase are summarized in Table 6.

Table 6: Training hyperparameters (training steps, warmup steps, max length, batch size of the multi-modal instruction data, optimizer, learning rate, learning rate decay, AdamW ε, AdamW β, and weight decay).

A.3 PROMPTS FOR TRAINING DATASET

We leverage the in-context few-shot learning capability of GPT-3.5 to automatically generate hallucinatory data for revising. Initially, we prompt GPT-3.5 to provide a list of objects that are highly likely to co-occur with the objects mentioned in the given description.
Next, we use LVLMs (such as MiniGPT-4) to generate descriptions for the training set of 5,000 images. During this process, we save the nouns whose $-\log p(z_i \mid s_{<i}, x)$ exceeds the uncertainty threshold γ during decoding to the list of uncertain objects for each image. Subsequently, we direct the model to take the original description and incorporate into it a randomly chosen word from the 'co-occurring objects' list as well as another randomly chosen word from the 'uncertain objects' list (a minimal sketch of this pipeline is given below). Detailed prompts are listed in Table 7 and a few examples are presented in Table 12.
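A minimal sketch of this construction step, under our own assumptions about the interfaces: the per-token scores would come from the LVLM's decoder, `extract_nouns` stands in for any noun tagger, and `call_gpt35` for an OpenAI chat-completion call; none of these helper names come from the paper.

```python
import random

def collect_uncertain_nouns(tokens, nlls, gamma, extract_nouns):
    """Keep nouns whose decoding uncertainty -log p(z_i | s_<i, x)
    exceeded the threshold gamma during generation."""
    nouns = set(extract_nouns(" ".join(tokens)))
    return [t for t, nll in zip(tokens, nlls) if nll > gamma and t in nouns]

def build_hallucinated_caption(caption, co_objects, uncertain_objects, call_gpt35):
    """Weave one co-occurring and one uncertain object into the caption
    via Instruction 2 of Table 7; returns a (hallucinated input, clean
    target) pair used to train the revisor."""
    prompt = (
        f"Input caption: {caption}\n"
        f"co objects list: {random.choice(co_objects)}\n"
        f"uncertain objects list: {random.choice(uncertain_objects)}\n"
        "Select one object from 'co objects list' and 'uncertain objects "
        "list' respectively and add it to 'Input caption' to get 'Output "
        "caption'. (Try not to change the format)\nOutput caption:"
    )
    return call_gpt35(prompt), caption
```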
Table 7: The prompt for the GPT-3.5 API to generate the required hallucination dataset. 'Instruction 1' asks GPT-3.5 to provide a list of co-occurring objects based on the description, while 'Instruction 2' integrates an object from the co-occurring object list and an object from the list of uncertain objects into the given description.

Instruction 1:
List three other objects that you think are most likely to appear with the objects in the scene described below:
{description}
Output in strict accordance with the following format:
Object one
Object two
Object three

Instruction 2:
Input caption: {description}
co objects list: {co objects list}
uncertain objects list: {uncertain objects list}
Select one object from 'co objects list' and 'uncertain objects list' respectively and add it to 'Input caption' to get 'Output caption'. (Try not to change the format)
Output caption:

A.4 DETAILS ABOUT BASELINES

In this section, we provide a detailed explanation of the settings used for the baselines in Table 1, including parameter settings and prompt configurations. The detailed prompts for the baselines can be seen in Table 8.
• Teacher: The 'Teacher' approach involves generating short descriptions for the images via BLIP-2 (Li et al., 2023b) and using them as context to guide the model in generating descriptions. By providing these descriptions as additional information, the model can benefit from the guidance and produce more accurate or relevant descriptions.

• CoT: The 'CoT' method asks the model to first list the objects it identifies in the image and then describe the image based on those objects. It draws inspiration from the concept of chain of thought (Wei et al., 2022) and aims to guide the model toward accurate descriptions by focusing on object recognition.
• Greedy-Decoding: The difference between the 'Greedy-Decoding' strategy and the 'Original' strategy is that the model uses greedy decoding instead of sampling when generating image descriptions, producing its most deterministic output. This setting is used to probe the potential connection between hallucination and the use of sampling (a sketch contrasting the two decoding modes follows this list).

• GPT-Ensemble: In 'GPT-Ensemble', we utilize GPT-3.5 to summarize the common elements in the descriptions generated by multiple LVLMs, excluding the one being evaluated. Subsequently, we employ GPT-3.5 to rewrite the description of the evaluated LVLM, using the identified common elements from the other models' descriptions to correct any dissimilar parts in the evaluated model's description.

• GPT-Teacher: 'GPT-Teacher' provides the GPT-3.5 API with contextual references and the model's output description, allowing it to revise the inaccurate description generated by the model into a more accurate version based on the contextual information.
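For concreteness, the two decoding modes differ only in the sampling flag of a standard autoregressive `generate` call. The snippet below is a generic Hugging Face sketch of ours; the model and prompt are placeholders, not the exact LVLM interfaces used in the paper.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")            # placeholder backbone
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("Describe this image:", return_tensors="pt")

# 'Original' setting: stochastic decoding (e.g., nucleus sampling).
sampled = model.generate(**inputs, do_sample=True, top_p=0.9,
                         max_new_tokens=64)

# 'Greedy-Decoding' baseline: always pick the most probable token.
greedy = model.generate(**inputs, do_sample=False, max_new_tokens=64)

print(tok.decode(sampled[0], skip_special_tokens=True))
print(tok.decode(greedy[0], skip_special_tokens=True))
```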
Table 8: Prompts for baselines.

Teacher:
Reference caption: {blip2 caption}
Please refer to reference caption and describe this picture:

CoT:
Human: Please list the main objects in the picture and strictly follow the following format: {object1, object2, object3......}
AI: {objects list}
Human: Describe this image
AI: {description}

GPT-Ensemble:
Reference captions 1: {description of model 1}
Reference captions 2: {description of model 2}
Reference captions 3: {description of model 3}
Reference captions 4: {description of model 4}
Reference captions 5: {description of model 5}
Original Description: {description}
Synthesizing the commonalities of Reference captions 1-5, and then removing the parts in the Original Description that do not align with the commonalities, while preserving the original format.
Answer:
GPT-Teacher:
Reference caption: {blip2 caption}
Original description: {description}
Rewrite the original description to align it with the reference caption, delete some objects that you think are hallucinations, and keep the original format.
Answer:

A.5 DETAILS ABOUT MANUAL ANNOTATION EVALUATIONS

The manual evaluation annotation interface provides a user-friendly interface for performing manual annotations and capturing evaluation feedback. The interface is hosted on the Amazon Web Services (AWS) platform, which offers scalability, reliability, and security for handling annotation tasks. As shown in Figure 4, we annotated all objects and all hallucinated objects in the descriptions based on the images. We then provided a binary label (0/1) indicating whether each description contained hallucinations. Based on the fine-grained annotation results, and similar to the GPT evaluation, we ranked the results from the different baselines (a small aggregation sketch follows).
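The average rankings reported in Table 2 can be aggregated from such per-image annotations as in the sketch below; this is our own minimal illustration of the aggregation, under the assumption that each annotation yields an ordering of method names, not the authors' tooling.

```python
from collections import defaultdict

def average_ranks(per_image_rankings):
    """per_image_rankings: one list per image, ordering method names from
    least to most hallucinated (rank 1 = least, the Table 2 convention)."""
    totals, counts = defaultdict(float), defaultdict(int)
    for ranking in per_image_rankings:
        for rank, method in enumerate(ranking, start=1):
            totals[method] += rank
            counts[method] += 1
    return {m: totals[m] / counts[m] for m in totals}

print(average_ranks([
    ["LURE", "CoT", "Teacher", "GPT-Teacher", "Original"],
    ["LURE", "Teacher", "CoT", "Original", "GPT-Teacher"],
]))  # LURE averages rank 1.0 on this toy input
```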
[Figure 4 shows the human evaluation annotation interface. Annotators are shown the image and the model description (e.g., "There are four children on a bench wearing baseball uniforms. Two of them are boys and the other two are girls. All of them are smiling and looking at the camera. The bench is located in a park or a playground, as there are trees visible in the background. There is a building on the right side of the image. The children are wearing helmets, and one of them is holding a baseball bat.") and fill in (1) the object list mentioned in the description, (2) the list of hallucinatory objects mentioned in the description (limited to objects only), and (3) whether hallucination exists in this description (1 if yes, 0 otherwise), following the stated format requirements.]

Figure 4: Human evaluation annotation interface.

# B DETAILED PROOF

B.1 PROOF OF THEOREM 2.1

Let us denote $N = |\mathcal{D}^{(1)}| = |\mathcal{D}^{(2)}|$. For the detection rule of the first object, we have

$$\hat\beta^{(1)}_k = \frac{1}{N}\sum_{(s_{<i},\,x,\,y_{i,k}) \in \mathcal{D}^{(1)}} y_{i,k}\,\phi_k(s_{<i},x).$$

As $\phi_k(s_{<i},x) \mid y_{i,k} \sim \mathcal{N}(y_{i,k}\,\mu^*_k,\ \sigma^2 I_d)$, we write $y_{i,k}\,\phi_k(s_{<i},x) = \mu^*_k + \epsilon_{i,k}$. Now, suppose that among all samples a fraction $\rho_0 \in (0,1)$ have both $y_1$ and $y_2$ equal to 1. We can then write

$$\big(\hat\beta^{(1)}_1, \hat\beta^{(1)}_2\big) = \Big(\rho_0\,\mu^*_1 + \frac{1}{N}\sum_{i=1}^{N}\epsilon_{i,1},\ \ \rho_0\,\mu^*_2 + \frac{1}{N}\sum_{i=1}^{N}\epsilon_{i,2}\Big).$$

Use $\Phi(\cdot)$ to denote the cumulative distribution function of the standard normal distribution. Then, for the prediction function $\hat f_2$ based on $\langle \phi_1(s_{<i},x), \hat\beta^{(1)}_1\rangle + \langle \phi_2(s_{<i},x), \hat\beta^{(1)}_2\rangle$, we have
$$\mathrm{Err}\big(\hat f^{(1)}_2\big) = \tfrac{1}{2}P\big(\langle \phi_1(s_{<i},x), \hat\beta^{(1)}_1\rangle + \langle \phi_2(s_{<i},x), \hat\beta^{(1)}_2\rangle < 0 \mid y=1\big) + \tfrac{1}{2}P\big(\langle \phi_1(s_{<i},x), \hat\beta^{(1)}_1\rangle + \langle \phi_2(s_{<i},x), \hat\beta^{(1)}_2\rangle > 0 \mid y=-1\big)$$
$$= \Phi\Bigg(-\frac{\rho_0\|\mu^*_1\|^2 + \rho_0\|\mu^*_2\|^2}{\sqrt{\rho_0^2\|\mu^*_1\|^2 + \rho_0^2\|\mu^*_2\|^2 + \frac{\sigma^2 d}{N} + \frac{\sigma^2 d}{N}}}\Bigg) + o(1).$$
Similarly, we have

$$\mathrm{Err}\big(\hat f^{(2)}_2\big) = \Phi\Bigg(-\frac{\rho\|\mu^*_1\|^2 + \rho\|\mu^*_2\|^2}{\sqrt{\rho^2\|\mu^*_1\|^2 + \rho^2\|\mu^*_2\|^2 + \frac{\sigma^2 d}{N} + \frac{\sigma^2 d}{N}}}\Bigg) + o(1).$$

As $\Phi\Big(-\frac{\rho\|\mu^*_1\|^2 + \rho\|\mu^*_2\|^2}{\sqrt{\rho^2\|\mu^*_1\|^2 + \rho^2\|\mu^*_2\|^2 + \sigma^2 d/N + \sigma^2 d/N}}\Big)$ is monotonically increasing with $\sigma$, we complete the proof.

B.2 PROOF OF THEOREM 2.2

We first analyze the uncertainty score. In fact, we have

$$\hat p_k = \frac{1}{m}\sum_{(s_{<i},x)} \sigma\big(\langle \phi_k(s_{<i},x), \beta_k\rangle\big) = \mathbb{E}\big[\sigma(\langle \phi_k(s_{<i},x), \beta_k\rangle)\big] + o_P(1) = \mathbb{E}\Big[\frac{1}{1+\exp\big(\|\beta_k\|^2 + \|\beta_k\|\,Z\big)}\Big] + o_P(1),$$

where $Z \sim \mathcal{N}(0,1)$ is a standard normal random variable. Therefore, $\hat p_k$ decreases when $\|\beta_k\|$ increases. Choosing samples with small $\hat p_k$ (i.e., large $-\log \hat p_k$) corresponds to larger sample sizes for the classes with larger $\|\mu^*_k\|$.
Then we analyze the misclassification error. For $\hat f_k = \mathrm{sgn}(\langle \phi(s_{<i},x), \hat\beta_k\rangle)$, we have

$$\mathrm{Err}(\hat f_k) = P\big(\mathrm{sgn}(\langle \phi(s_{<i},x), \hat\beta_k\rangle) \neq y\big) = \tfrac{1}{2}P\big(\langle \phi(s_{<i},x), \hat\beta_k\rangle < 0 \mid y=1\big) + \tfrac{1}{2}P\big(\langle \phi(s_{<i},x), \hat\beta_k\rangle > 0 \mid y=-1\big).$$

As $\phi_k(s_{<i},x) \mid y \sim \mathcal{N}(y\,\mu^*_k, I_d)$, we have

$$P\big(\langle \phi_k(s_{<i},x), \hat\beta_k\rangle < 0 \mid y=1\big) = P\big(\langle \phi_k(s_{<i},x), \hat\beta_k\rangle > 0 \mid y=-1\big) = \Phi\Big(-\frac{\langle \mu^*_k, \hat\beta_k\rangle}{\|\hat\beta_k\|}\Big).$$

As $\hat\beta_k = \mu^*_k + \frac{1}{n_k}\sum_{i=1}^{n_k}\epsilon_i := \mu^*_k + \frac{1}{\sqrt{n_k}}Z$, we have

$$\frac{\langle \mu^*_k, \hat\beta_k\rangle}{\|\hat\beta_k\|} = \frac{\|\mu^*_k\|^2 + \langle \mu^*_k, Z\rangle/\sqrt{n_k}}{\sqrt{\|\mu^*_k\|^2 + 2\langle \mu^*_k, Z\rangle/\sqrt{n_k} + \|Z\|^2/n_k}}.$$

As we assume $\|\mu^*_k\|^2 \ll d$, we have

$$\frac{\langle \mu^*_k, \hat\beta_k\rangle}{\|\hat\beta_k\|} = \frac{\|\mu^*_k\|^2}{\sqrt{\|\mu^*_k\|^2 + d/n_k}}\,\big(1+o_P(1)\big).$$

As a result, if the total sample size is fixed, choosing a larger $n_k$ for the classes with smaller $\|\mu^*_k\|$ makes the average misclassification error small.

# C ADDITIONAL ANALYSIS OF LURE

C.1 MODEL PERFORMANCE ANALYSIS WITH ADDITIONAL METRICS

In this section, we conduct additional analysis on the same dataset using metrics commonly applied to vision-language models, and we discuss the applicability of these metrics to hallucination evaluation.

C.1.1 DESCRIPTIONS OF ADDITIONAL METRICS

BLEU. BLEU (Bilingual Evaluation Understudy) (Papineni et al., 2002) is a metric used to evaluate the quality of machine-generated translations by comparing them to one or more reference translations. The BLEU score is based on the precision of n-grams, contiguous sequences of n words, and measures how well the generated translation matches the reference translations in terms of n-gram overlap.
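For instance, sentence-level BLEU-1 through BLEU-4 (as reported in Table 10) can be computed with NLTK; this is our illustration with toy tokens, since the paper does not specify its implementation.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["a", "man", "holding", "a", "red", "umbrella"]]
candidate = ["a", "man", "with", "a", "red", "umbrella"]

smooth = SmoothingFunction().method1
for n in range(1, 5):  # BLEU-1 ... BLEU-4
    weights = tuple([1.0 / n] * n)
    print(f"BLEU-{n}:", sentence_bleu(reference, candidate,
                                      weights=weights,
                                      smoothing_function=smooth))
```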
BERTScore. BERTScore (Zhang et al., 2019) is a method for evaluating the quality of natural-language generation or summarization systems. It measures the similarity between a reference text and a generated text by computing contextualized embeddings using BERT.

ROUGE-L. ROUGE-L (Recall-Oriented Understudy for Gisting Evaluation - Longest Common Subsequence) (Lin, 2004) is an evaluation metric commonly used in natural language processing and text summarization tasks. It is designed to measure the quality of a machine-generated summary by comparing it to one or more reference summaries.

CLIP. The CLIP (Contrastive Language-Image Pretraining (Radford et al., 2021)) score is a metric used to evaluate vision-language models; it measures how well the model associates images with their corresponding captions or textual descriptions.
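As an illustration of the image-text matching metric, a CLIP-based score can be computed as the cosine similarity between image and caption embeddings; the sketch below uses the open-source CLIP weights in Hugging Face `transformers` and is our illustration, not the paper's exact evaluation script.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(image_path: str, caption: str) -> float:
    """Cosine similarity between CLIP image and text embeddings."""
    image = Image.open(image_path).convert("RGB")
    inputs = proc(text=[caption], images=image,
                  return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return float((img * txt).sum())

# print(clip_score("example.jpg", "A sandwich on a plate next to a chair."))
```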
C.1.2 RESULTS

In Table 10, we present the performance of the different models and baselines on these metrics. Based on the experimental results, it is evident that LURE outperforms the other baselines on both the text-translation metrics and the image-text matching metrics, with a notable improvement in the CLIP score. This may be attributed to the CLIP score being more sensitive to object-level differences than text-translation metrics such as BLEU. These findings are consistent with the overall experimental results presented in Table 1, further confirming the effectiveness of LURE. However, we also identified certain issues with the BLEU metric: the differences between baselines were not very pronounced, possibly because such metrics tend to emphasize text style rather than object-level distinctions. Compared with CHAIR, these metrics may therefore be less well-suited for assessing hallucinations in long-form descriptions.

Table 9: The prompt for the GPT-3.5 evaluation.

Instruction: Suppose you are a hallucination annotator who judges the degree of hallucination based on objects, and you have the following image information.
Reference captions: {five captions from COCO}
Bounding box: {bounding boxes}
Please just provide the ranks for the below descriptions without any explanation, where the caption ranks first with the most hallucinations.
The output format: [caption x,...]
Descriptions:
caption 1: {description 1}
caption 2: {description 2}
caption 3: {description 3}
caption 4: {description 4}
caption 5: {description 5}
Output:
C.2 ADDITIONAL ANALYSIS ABOUT THE HALLUCINATION FACTORS

To validate that our method reduces the co-occurrence, uncertainty, and object-position biases that affect object hallucination, we evaluate the proportion of hallucinatory objects among objects with high uncertainty, objects with high co-occurrence scores, and objects in sentence-ending positions. We compare these proportions for MiniGPT-4's descriptions before and after applying LURE on the COCO 2014 test dataset. We first describe how we calculate the object ratio under each factor.

Ratio of Co-occurrence-Based Hallucinatory Objects. Similar to the uncertainty hallucination ratio, we obtain $C_{ratio}$ as the ratio of the number of hallucinated objects with high co-occurrence scores to the total number of objects with high co-occurrence scores:

$$C_{ratio} = \frac{\sum_{m=1}^{M_h} \sum_{i} \mathbb{1}[\mathrm{CoScore}_{m,i} > \mathrm{CoScore}_{mean}]}{\sum_{m=1}^{M} \sum_{i} \mathbb{1}[\mathrm{CoScore}_{m,i} > \mathrm{CoScore}_{mean}]}, \tag{5}$$

where $M_h$ is the number of hallucinatory descriptions, $M$ is the total number of descriptions, and $\mathrm{CoScore}_{mean}$ is the average co-occurrence score over all descriptions.

Ratio of Uncertainty-Based Hallucinatory Objects. We obtain $U_{ratio}$ as the ratio of the number of hallucinated objects with high uncertainty to the total number of objects with high uncertainty:

$$U_{ratio} = \frac{\sum_{m=1}^{M_h} \sum_{j=1}^{n_h+n_r} \mathbb{1}[\mathrm{UnScore}_{m,j} > \mathrm{UnScore}_{mean}]}{\sum_{m=1}^{M} \sum_{j=1}^{n_h+n_r} \mathbb{1}[\mathrm{UnScore}_{m,j} > \mathrm{UnScore}_{mean}]}, \tag{6}$$

where $\mathrm{UnScore}_{mean} = \frac{1}{M(n_h+n_r)} \sum_{m=1}^{M} \sum_{j=1}^{n_h+n_r} \mathrm{UnScore}_{m,j}$.
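All three ratios (Eqs. (5)-(7)) share the same "hallucinated fraction among objects above a score threshold" form; the sketch below is our own generic illustration, with assumed data structures, that instantiates any of them.

```python
def factor_ratio(objects, threshold):
    """objects: iterable of (score, is_hallucinated) pairs pooled over all
    descriptions, where `score` is CoScore, UnScore, or PoScore and
    `threshold` is the corresponding mean (or the position threshold eta).
    Returns the fraction of high-score objects that are hallucinated."""
    high = [(s, h) for s, h in objects if s > threshold]
    if not high:
        return 0.0
    return sum(h for _, h in high) / len(high)

# Toy usage for the uncertainty factor:
objs = [(2.1, True), (0.3, False), (1.8, False), (2.5, True)]
mean_score = sum(s for s, _ in objs) / len(objs)   # plays UnScore_mean
print(factor_ratio(objs, mean_score))              # 2 of 3 high-score objects
```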
Table 10: Performance of different models and baselines on general metrics.

| Model | Method | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | BERTScore | ROUGE-L | CLIP-S |
|---|---|---|---|---|---|---|---|---|
| mPLUG-Owl | Original | 30.37 | 14.59 | 5.618 | 2.505 | 86.87 | 30.21 | 0.168 |
| | CoT | 25.04 | 11.48 | 4.229 | 1.954 | 86.61 | 29.86 | 0.189 |
| | Teacher | 29.91 | 14.22 | 5.519 | 2.431 | 86.76 | 31.15 | 0.192 |
| | Greedy-Decoding | 30.29 | 14.30 | 5.509 | 2.502 | 86.59 | 30.35 | 0.208 |
| | GPT-Ensemble | 29.74 | 13.91 | 5.121 | 2.367 | 85.94 | 28.90 | 0.159 |
| | GPT-Teacher | 28.19 | 14.13 | 6.181 | 3.128 | 86.65 | 30.87 | 0.215 |
| | LURE (ours) | 30.44 | 15.47 | 6.640 | 3.576 | 86.65 | 30.31 | 0.267 |
| LLaVa | Original | 30.88 | 15.46 | 6.984 | 3.586 | 86.96 | 31.53 | 0.242 |
| | CoT | 29.94 | 15.01 | 7.042 | 3.718 | 86.99 | 31.82 | 0.211 |
| | Teacher | 30.52 | 15.54 | 7.334 | 3.906 | 87.11 | 31.76 | 0.256 |
| | Greedy-Decoding | 31.76 | 17.21 | 8.491 | 4.223 | 87.01 | 32.50 | 0.249 |
| | GPT-Ensemble | 25.68 | 16.24 | 7.047 | 2.893 | 84.10 | 30.84 | 0.201 |
| | GPT-Teacher | 22.06 | 19.54 | 3.393 | 1.493 | 85.94 | 27.62 | 0.251 |
| | LURE (ours) | 35.94 | 21.81 | 11.33 | 6.804 | 87.39 | 32.59 | 0.238 |
| LLaMA-Adapter | Original | 29.95 | 15.36 | 7.324 | 3.875 | 86.83 | 31.77 | 0.179 |
| | CoT | 25.45 | 11.41 | 4.233 | 1.687 | 86.48 | 39.98 | 0.201 |
| | Teacher | 26.71 | 12.88 | 5.388 | 2.636 | 86.65 | 30.50 | 0.142 |
| | Greedy-Decoding | 30.66 | 14.63 | 6.920 | 2.309 | 86.90 | 31.69 | 0.211 |
| | GPT-Ensemble | 24.92 | 11.21 | 4.678 | 1.890 | 84.92 | 27.12 | 0.140 |
| | GPT-Teacher | 25.13 | 10.25 | 3.929 | 1.684 | 85.85 | 28.68 | 0.186 |
| | LURE (ours) | 30.94 | 15.81 | 7.334 | 3.804 | 86.96 | 31.60 | 0.223 |
| MiniGPT-4 | Original | 31.22 | 16.57 | 9.270 | 5.190 | 86.96 | 31.75 | 0.157 |
| | CoT | 33.68 | 20.57 | 10.72 | 6.430 | 86.09 | 32.39 | 0.177 |
| | Teacher | 32.69 | 19.87 | 9.870 | 5.350 | 86.06 | 30.72 | 0.142 |
| | Greedy-Decoding | 35.12 | 22.89 | 12.38 | 6.770 | 87.22 | 33.93 | 0.198 |
| | GPT-Ensemble | 29.65 | 19.22 | 9.878 | 5.330 | 85.77 | 29.83 | 0.140 |
| | GPT-Teacher | 33.37 | 20.28 | 11.52 | 5.770 | 87.01 | 31.89 | 0.182 |
| | LURE (ours) | 41.20 | 23.17 | 13.18 | 7.580 | 87.88 | 35.34 | 0.210 |
| MMGPT | Original | 27.27 | 12.66 | 5.680 | 2.290 | 79.79 | 29.03 | 0.177 |
| | CoT | 26.11 | 12.30 | 5.580 | 2.250 | 76.90 | 28.77 | 0.192 |
| | Teacher | 26.56 | 12.38 | 5.600 | 2.260 | 80.16 | 22.09 | 0.162 |
| | Greedy-Decoding | 30.15 | 15.11 | 6.320 | 3.573 | 86.62 | 31.77 | 0.188 |
| | GPT-Ensemble | 24.59 | 13.77 | 5.673 | 2.882 | 84.22 | 25.78 | 0.156 |
| | GPT-Teacher | 23.60 | 10.92 | 4.610 | 2.010 | 83.11 | 23.43 | 0.178 |
| | LURE (ours) | 32.71 | 16.24 | 7.407 | 3.830 | 87.01 | 32.31 | 0.201 |
| InstructBLIP | Original | 29.46 | 14.52 | 5.670 | 2.421 | 86.71 | 31.64 | 0.218 |
| | CoT | 24.04 | 12.61 | 4.086 | 1.837 | 85.50 | 28.07 | 0.229 |
| | Teacher | 25.61 | 12.22 | 4.321 | 1.963 | 85.93 | 29.89 | 0.294 |
| | Greedy-Decoding | 29.22 | 13.98 | 5.605 | 2.344 | 86.11 | 32.57 | 0.276 |
| | GPT-Ensemble | 26.32 | 13.11 | 5.101 | 2.396 | 85.04 | 30.77 | 0.198 |
| | GPT-Teacher | 24.91 | 11.92 | 4.652 | 2.097 | 85.81 | 29.49 | 0.205 |
| | LURE (ours) | 29.77 | 15.23 | 5.708 | 2.634 | 87.94 | 32.95 | 0.307 |
Table 11: Uncertainty-based, co-occurrence-based, and sentence-ending hallucination object ratios for several models.

| Model | | C_ratio | U_ratio | S_ratio |
|---|---|---|---|---|
| MiniGPT-4 | Original | 0.106 | 0.221 | 0.227 |
| | LURE (ours) | 0.071 | 0.145 | 0.139 |
| LLaVa | Original | 0.243 | 0.103 | 0.331 |
| | LURE (ours) | 0.142 | 0.086 | 0.139 |
| LLaMA-Adapter | Original | 0.295 | 0.178 | 0.442 |
| | LURE (ours) | 0.176 | 0.102 | 0.272 |
| mPLUG-Owl | Original | 0.128 | 0.229 | 0.259 |
| | LURE (ours) | 0.106 | 0.127 | 0.151 |
| MMGPT | Original | 0.110 | 0.157 | 0.418 |
| | LURE (ours) | 0.089 | 0.114 | 0.154 |
| InstructBLIP | Original | 0.213 | 0.147 | 0.389 |
| | LURE (ours) | 0.123 | 0.090 | 0.156 |

Figure 5: Sensitivity analysis of the uncertainty threshold using (a) MiniGPT-4 and (b) LLaVA as the revisor backbone.

Ratio of Hallucinatory Objects in the Later Part of the Sentence. For the ratio of hallucinatory objects in the later part of the sentence, we calculate $S_{ratio}$ as the ratio of the number of hallucinated objects in the later part of the sentence to the total number of objects in the later part of the sentence:

$$S_{ratio} = \frac{\sum_{m=1}^{M_h} \sum_{i} \mathbb{1}[\mathrm{PoScore}_{m,i} > \eta]}{\sum_{m=1}^{M} \sum_{i} \mathbb{1}[\mathrm{PoScore}_{m,i} > \eta]}, \tag{7}$$
where η is the position threshold.

Results. Based on the data presented in Table 11, all three ratios for LURE's descriptions decrease compared with the ratios for the original descriptions. This indicates that uncertainty, co-occurrence, and object position contribute less to hallucinations under LURE.

C.3 PARAMETER SENSITIVITY ANALYSIS ON UNCERTAINTY

To further illustrate the robustness of our model, we conducted a parameter sensitivity analysis on the uncertainty threshold. The uncertainty threshold θ determines the proportion of 'object' tokens replaced with [IDK]. From Figure 5, we observe that our model is robust within a certain range of the uncertainty threshold (a sketch of such a sweep is given below).
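A sensitivity sweep of this kind can be scripted as below; this is a sketch under our assumptions, where `revise_with_threshold` stands in for revisor inference at a given threshold and `evaluate_chair` for the CHAIR computation of Eq. (4) — neither name comes from the paper.

```python
def sweep_uncertainty_threshold(thresholds, descriptions, images,
                                revise_with_threshold, evaluate_chair):
    """Re-run the revisor at several uncertainty thresholds and record
    (CHAIR_S, CHAIR_I) for each setting (cf. Figure 5)."""
    results = {}
    for theta in thresholds:
        revised = [revise_with_threshold(d, img, theta)
                   for d, img in zip(descriptions, images)]
        results[theta] = evaluate_chair(revised, images)
    return results

# e.g. sweep_uncertainty_threshold([0.5, 1.0, 1.5, 2.0],
#                                  descs, imgs, revise_fn, chair_fn)
```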
Original Caption: The image shows a man walking down a rainy sidewalk while holding a bright red umbrella to stay dry. The man walks next to a building as rain pours down, making the umbrella a necessary accessory. In addition to the man with the red umbrella, there are several other people in the scene, some of which are carrying handbags despite the wet conditions. Towards the edge of the image, a person holding a small umbrella can be seen, indicating that others are also trying to shield themselves from the rain. The busy street with multiple passersby creates an atmosphere of a bustling city adapting to the rainy weather.

Hallucination Caption: The image shows a man walking down a rainy sidewalk while holding a bright red umbrella to stay dry. Cars driving by with their headlights on contribute to the bustling city atmosphere. The man walks next to a building as rain pours down, making the umbrella a necessary accessory. In addition to the man with the red umbrella, there are several other people in the scene, some of which are carrying handbags despite the wet conditions. Towards the edge of the image, a person holding a small umbrella can be seen, indicating that others are also trying to shield themselves from the rain. The busy street with multiple passersby creates an atmosphere of a bustling city adapting to the rainy weather.

(a) Case 1.
Original Caption: The image depicts a woman in a green shirt sitting at a dining table eating a meal. She is using a fork to pick at the food on her plate, which is positioned right in front of her. The table is set with a variety of items like a sandwich, a bowl, and multiple utensils such as knives and spoons. There are also several cups placed on the table. Although there are other chairs around the table, the woman appears to be dining alone, adding a sense of solitude to the scene. Other empty chairs can be seen in various positions around the dining table. Additionally, there are a few other individuals in the background, but they don't seem to be engaging with the woman or her meal.

Hallucination Caption: The image depicts a woman in a green shirt sitting at a dining table eating a meal. She is using a fork to pick at the food on her plate, which is positioned right in front of her. The table is set with a variety of items like a sandwich, fries, a bowl, and multiple utensils such as knives and spoons. There are also several cups placed on the table. A napkin and water glass are neatly placed beside her plate. Although there are other chairs around the table, the woman appears to be dining alone, adding a sense of solitude to the scene. Other empty chairs can be seen in various positions around the dining table. Additionally, there are a few other individuals in the background, but they don't seem to be engaging with the woman or her meal. The salt and pepper shakers are placed at the center of the table, within easy reach of the woman.
(b) Case 2.

Table 12: Cases of generating hallucinatory descriptions.

[Figure 6 shows a MiniGPT-4 description with the decoding probabilities displayed at one token position. The description reads: "This image depicts a group of people sitting around a table. The people are wearing different clothes. There is a window in the background, and the room appears to be well-lit. The walls of the room are painted white and there are two doorways that lead to other rooms." An accompanying bar chart shows the probability of each vocabulary candidate at the position marked by the red box.]

Figure 6: Case of uncertainty in MiniGPT-4.
# D ADDITIONAL CASE STUDIES

D.1 CASES OF UNCERTAINTY

We provide an example using MiniGPT-4 to illustrate the uncertainty present in LVLMs during the decoding process. In the example, we display the word probabilities over the vocabulary at the position of a hallucinatory word (sorted in descending order of probability). As shown in Figure 6, we display the decoded tokens and their probabilities at the point where the hallucinatory word 'window' occurs. We observe that the probability of the hallucinatory word 'window' is comparable to that of 'book'. The uncertainty in the model's decoding path is highly influenced by the text generated earlier, leading to the incorrect selection of the word 'window' when generating this token.

D.2 CASES OF OUR TRAINING DATASET

Here, we present some cases of the training data constructed using GPT-3.5, as shown in Table 12. 'Original caption' represents the original standard description, while the 'Hallucination caption' column represents the hallucinated description constructed by GPT-3.5. The red portions in the hallucination captions indicate the hallucinations added by GPT-3.5 based on the co-occurring object lists and the uncertain object lists.

D.3 CASES OF REWRITING CAPTIONS

In this section, we present several examples of rectified descriptions to demonstrate the capability of LURE to reduce hallucination. From Figure 8 we can see that our model demonstrates a high level of proficiency in removing or substituting hallucinatory objects.

D.4 ADDITIONAL CASE COMPARISON BETWEEN LURE AND BASELINES

We carefully selected several baselines that demonstrated promising performance based on our experimental results and conducted a thorough comparison with our proposed method. The detailed results of this comparison can be found in Figure 9. Upon comparing the descriptions generated by the revisor with those from the other methods, it becomes evident that the revisor surpasses the others in terms of accuracy and level of detail in describing the image. The description produced by the revisor effectively captures the key elements of the image, such as the presence of a man wearing a white shirt walking on the tennis court while holding a tennis racket, as well as the presence of other individuals in the scene. On the contrary, the other methods fall short in various aspects.
2310.00754#73 | Analyzing and Mitigating Object Hallucination in Large Vision-Language Models | D.2 CASES OF OUR TRAINING DATASET Here, we present some cases of training data constructed using GPT-3.5, as shown in Table 12. â Original captionâ represents the original standard description, while the â Hallucination captionâ column represents the hallucinated description constructed by GPT-3.5. The red portions in the hallucination captions indicate the hallucinations added by GPT-3.5 based on co-occurring object lists and uncertain object lists. D.3 CASES OF REWRITING CAPTIONS In this section, we present several examples of rectified descriptions to demonstrate the capabilities of LURE in reducing hallucination. From 8 we can find that our model demonstrates a high level of proficiency in removing or substituting hallucinatory objects. D.4 ADDITIONAL CASE COMPARISON BETWEEN LURE AND BASELINES We carefully selected several baselines that demonstrated promising performance based on our ex- perimental results and conducted a thorough comparison with our proposed method. The detailed results of this comparison can be found in Figure 9. Upon comparing the descriptions generated by Revisior with those from the other methods, it becomes evident that Revisior surpasses the others in terms of accuracy and level of detail in describing the image. The description produced by Revisior effectively captures the key elements of the image, such as the presence of a man wearing a white shirt walking on the tennis court while holding a tennis racket, as well as the presence of other individuals in the scene. On the contrary, the other methods fall short in various aspects. | 2310.00754#72 | 2310.00754#74 | 2310.00754 | [
"2308.14972"
] |
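The training-data construction in D.2 amounts to prompting GPT-3.5 to weave extra objects into a clean caption. A minimal sketch follows; the prompt wording is illustrative, not the paper's exact template, and assumes the OpenAI Python client (>= 1.0).

```python
from openai import OpenAI

client = OpenAI()

def make_hallucination_caption(caption: str, objects: list[str]) -> str:
    """Ask GPT-3.5 to insert co-occurring/uncertain objects into a caption."""
    prompt = (
        "Rewrite the caption so that it additionally mentions these objects, "
        f"keeping the original style unchanged: {', '.join(objects)}.\n\n"
        f"Caption: {caption}"
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```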
2310.00754#74 | Analyzing and Mitigating Object Hallucination in Large Vision-Language Models | The "Original" method's description includes numerous hallucinated objects like the "net" and "cap." Although the "CoT" method's description has fewer hallucinated objects, it is observed that errors in the step-by-step reasoning process, such as incorrectly stating the presence of two tennis players, lead to corresponding errors in subsequent descriptions. While the "Teacher" method's description is somewhat accurate, it still struggles to eliminate hallucinated objects effectively. Although GPT demonstrates strong textual comprehension abilities, it can still make mistakes when rewriting descriptions due to the absence of visual patterns, failing to remove hallucinated objects and introducing new errors. | 2310.00754#73 | 2310.00754#75 | 2310.00754 | [
"2308.14972"
] |
2310.00754#75 | Analyzing and Mitigating Object Hallucination in Large Vision-Language Models | [Figure 8, case 1 (dialogue screenshot): prompt "Describe this image". LVLM output: "The image features a woman playing tennis on a clay court. She is in the middle of a swing, holding a tennis racket and preparing to hit the ball. The woman is wearing a pink dress and white shoes, which complement her athletic attire. There are several other [garbled highlight] in the scene, likely [garbled highlight], positioned around the court. [garbled highlight] are standing closer to the woman, while [garbled highlight] are further away, observing the game. The main focus of the image remains the woman tennis player in action."] | 2310.00754#74 | 2310.00754#76 | 2310.00754 | [
"2308.14972"
] |
2310.00754#76 | Analyzing and Mitigating Object Hallucination in Large Vision-Language Models | [Figure 8, case 1 continued. LURE revision: "The image features a female tennis player standing on a clay tennis court. The player is wearing a red dress and is holding a tennis racket in her right hand, ready to play. Her left hand is raised above her head, possibly to catch the ball during the game. The player appears to be focused and determined, as she is getting ready to make a serve."] [Figure 8, case 2: prompt "Describe this image". LVLM output: "The image captures a man skillfully riding a wave on a surfboard in the ocean. He is crouching down on his board, maintaining balance and control as he navigates the wave. The surfer is positioned towards the center of the image, with the wave surrounding him. In the background, there are [garbled highlight] in the water, possibly observing or waiting for their turn to surf. Some of them are closer to the [garbled highlight], while [garbled highlight] are further out in the ocean. The scene conveys the excitement and challenge of surfing, as well as the camaraderie among those who share this passion."] | 2310.00754#75 | 2310.00754#77 | 2310.00754 | [
"2308.14972"
] |
2310.00754#77 | Analyzing and Mitigating Object Hallucination in Large Vision-Language Models | [Figure 8, case 2 continued. LURE revision: "The image shows a person standing on a surfboard in the ocean, riding a large wave. The person is wearing a black shirt and black pants and is positioned on top of the surfboard as he rides the wave. The wave is large and has a lot of water coming off of it. The person appears to be enjoying the ride."] [Figure 8, case 3: prompt "Describe this image". LVLM output: "The image shows a man laying on his stomach in a grassy field, with water and a frisbee scattered around him. The man is wearing [garbled highlight] and is smiling, as if he's having fun playing with the frisbee. The image is sunny, with green grass and trees in the background. There are also a few people visible in the distance, watching the man play." LURE revision: "The image shows a man laying on his stomach in a park, with a frisbee scattered around him. The man is smiling, as if he's having fun with the frisbee. The image is sunny, with green grass and trees in the background. There are also a few people in the park, watching the scene."] | 2310.00754#76 | 2310.00754#78 | 2310.00754 | [
"2308.14972"
] |
2310.00754#78 | Analyzing and Mitigating Object Hallucination in Large Vision-Language Models | [Figure 8, case 4: prompt "Describe this image". LVLM output: "This image is of a man doing a trick on a snowboard in mid-air with his back facing the camera. The man is wearing a red jacket and black pants, and he is holding onto the snowboard with both hands. The background is a city street with buildings and [garbled highlight] in the distance. There are also [garbled highlight] banners hanging from the buildings." LURE revision: "The image features a man in mid-air, likely doing a trick on a snowboard. He is wearing a red jacket and black pants, and he is holding onto the snowboard with both hands. The man is the main focus of the scene, as he is the only person visible in the air."] | 2310.00754#77 | 2310.00754#79 | 2310.00754 | [
"2308.14972"
] |
2310.00754#79 | Analyzing and Mitigating Object Hallucination in Large Vision-Language Models | Figure 8: Additional cases of rectified descriptions. [Figure 9, "Original" baseline: prompt "Describe this image". Output: "The image shows a man in a white shirt and black shorts walking on a tennis court, holding a racket in his hand. He is wearing a [garbled highlight], and his shoes are white with [garbled highlight]. The crowd in the background is watching the match. The court is made of green grass, and there are white lines on it. The man is standing on the right side of the image, and [garbled highlight] can be seen in the background. The image is clear and effectively captures the details of the man and the court."] | 2310.00754#78 | 2310.00754#80 | 2310.00754 | [
"2308.14972"
] |
2310.00754#80 | Analyzing and Mitigating Object Hallucination in Large Vision-Language Models | [Figure 9, "Teacher" baseline: prompt "blip2 caption: a man holding a tennis racket. Reference caption: {blip2 caption}. Please refer to reference caption and describe this picture." Output: "The image depicts a man standing on the tennis court, holding a tennis racket in one hand, and preparing to serve the ball. He is wearing a white shirt and appears to be focused on his game. The tennis ball can be seen in the foreground, slightly behind him. There are several other people visible in different positions around the court. Some of them are standing closer to the center of the image, while others are positioned near the edges of it. In the background, [garbled highlight] are parked on [garbled highlight], adding a sense of urban setting to this outdoor sports event."] [Figure 9, "CoT" baseline: prompt "Please list the main objects in the picture and strictly follow the following format: {object1, object2, object3......}" Output: "The main objects in the picture are [garbled highlight], both dressed in white and standing on a tennis court. There are also tennis balls on the court and a racket visible in one of the players' hands." Follow-up prompt "Describe this image". Output: "The image shows [garbled highlight] dressed in white playing a game of tennis. One player is holding a tennis racket and preparing to swing it at the ball while the other player stands close by, [garbled highlight]. They are positioned on a well-maintained tennis court, with tennis balls scattered around the surface. The setting seems like a typical tennis match with a professional atmosphere."] [Figure 9, "GPT-Teacher" baseline: prompt "blip2 caption: a man holding a tennis racket. Reference caption: {blip2 caption}. Original description: 'The image shows a man in a white shirt and black shorts walking on a tennis court, holding a racket in his hand. | 2310.00754#79 | 2310.00754#81 | 2310.00754 | [
"2308.14972"
] |
2310.00754#81 | Analyzing and Mitigating Object Hallucination in Large Vision-Language Models | He is wearing a [garbled highlight], and his shoes are white with [garbled highlight]. The crowd in the background is watching the match. The court is made of green grass, and there are white lines on it. The man is standing on the right side of the image, and [garbled highlight] can be seen in the background. The image is clear and effectively captures the details of the man and the court.' Rewrite the original description to align it with the reference caption, delete some objects that you think are hallucinations, and keep the original format." | 2310.00754#80 | 2310.00754#82 | 2310.00754 | [
"2308.14972"
] |
2310.00754#82 | Analyzing and Mitigating Object Hallucination in Large Vision-Language Models | Answer: "The image depicts a man holding a tennis racket. He is wearing a white shirt and black shorts. The man is standing on a tennis court with green grass and white lines. In the background, a crowd is watching the match, and [garbled highlight] is visible."] [Figure 9, LURE: given the prompt "Describe this image" and the original description above, the revisor outputs: "The image shows a man in a white shirt and black shorts walking on a tennis court, holding a tennis racket in his hand. He appears to be the main focus of the scene, as he is walking towards the right side of the image. The court is made of green grass, and there are white lines on it."] Figure 9: Case study of several strong baselines, including detailed dialogue flow of the real inquiry process for each baseline. | 2310.00754#81 | 2310.00754#83 | 2310.00754 | [
"2308.14972"
] |
2310.00754#83 | Analyzing and Mitigating Object Hallucination in Large Vision-Language Models | | 2310.00754#82 | | 2310.00754 | [
"2308.14972"
] |
|
2309.16609#0 | Qwen Technical Report | arXiv:2309.16609v1 [cs.CL] 28 Sep 2023 # QWEN TECHNICAL REPORT Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu. # Qwen Team, Alibaba Group* | 2309.16609#1 | 2309.16609 | [
"2305.20050"
] |
|
2309.16609#1 | Qwen Technical Report | # ABSTRACT Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce QWEN, the first installment of our large language model series. QWEN is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes QWEN, the base pretrained language models, and QWEN-CHAT, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, CODE-QWEN and CODE-QWEN-CHAT, as well as mathematics-focused models, MATH-QWEN-CHAT, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models. | 2309.16609#0 | 2309.16609#2 | 2309.16609 | [
"2305.20050"
] |
2309.16609#2 | Qwen Technical Report | *Authors are ordered alphabetically by the last name. Correspondence to: [email protected]. [1] QWEN is a moniker of Qianwen, which means "thousands of prompts" in Chinese. The pronunciation of "QWEN" can vary depending on the context and the individual speaking it. Here is one possible way to pronounce it: /kwEn/. # Contents 2.1 Data; 2.2 Tokenization; 2.3 Architecture; 2.4 Training; 2.5 Context Length Extension; 2.6 Experimental Results; 3.1 Supervised Finetuning (3.1.1 Data; 3.1.2 Training); 3.2 Reinforcement Learning from Human Feedback (3.2.1 Reward Model; 3.2.2 Reinforcement Learning); 3.3 Automatic and Human Evaluation of Aligned Models; 3.4 Tool Use, Code Interpreter, and Agent; 4.1 Code Pretraining; 4.2 Code Supervised Fine-Tuning; 4.3 Evaluation; 5.1 Training; 5.2 Evaluation; 6.1 Large Language Models; 6.2 Alignment; 6.3 Tool Use and Agents; 6.4 LLM for Coding; 6.5 LLM for Mathematics | 2309.16609#1 | 2309.16609#3 | 2309.16609 | [
"2305.20050"
] |
2309.16609#3 | Qwen Technical Report | A.1 More Training Details (A.1.1 Data Format for QWEN-CHAT); A.2 Evaluation (A.2.1 Automatic Evaluation; A.2.2 Human Evaluation); A.3 Analysis of Code Interpreter | 2309.16609#2 | 2309.16609#4 | 2309.16609 | [
"2305.20050"
] |
2309.16609#4 | Qwen Technical Report | # 1 INTRODUCTION Large language models (LLMs) (Radford et al., 2018; Devlin et al., 2018; Raffel et al., 2020; Brown et al., 2020; OpenAI, 2023; Chowdhery et al., 2022; Anil et al., 2023; Thoppilan et al., 2022; Touvron et al., 2023a;b) have revolutionized the field of artificial intelligence (AI) by providing a powerful foundation for complex reasoning and problem-solving tasks. These models have the ability to compress vast knowledge into neural networks, making them incredibly versatile agents. With a chat interface, LLMs can perform tasks that were previously thought to be the exclusive domain of humans, especially those involving creativity and expertise (OpenAI, 2022; Ouyang et al., 2022; Anil et al., 2023; Google, 2023; Anthropic, 2023a;b). They can engage in natural language conversations with humans, answering questions, providing information, and even generating creative content such as stories, poems, and music. This has led to the development of a wide range of applications, from chatbots and virtual assistants to language translation and summarization tools. LLMs are not just limited to language tasks. They can also function as a generalist agent (Reed et al., 2022; Bai et al., 2022a; Wang et al., 2023a; AutoGPT, 2023; Hong et al., 2023), collaborating with external systems, tools, and models to achieve the objectives set by humans. For example, LLMs can understand multimodal instructions (OpenAI, 2023; Bai et al., 2023; Liu et al., 2023a; Ye et al., 2023; Dai et al., 2023; Peng et al., 2023b), execute code (Chen et al., 2021; Zheng et al., 2023; Li et al., 2023d), use tools (Schick et al., 2023; LangChain, Inc., 2023; AutoGPT, 2023), and more. This opens up a whole new world of possibilities for AI applications, from autonomous vehicles and robotics to healthcare and finance. | 2309.16609#3 | 2309.16609#5 | 2309.16609 | [
"2305.20050"
] |
2309.16609#5 | Qwen Technical Report | As these models continue to evolve and improve, we can expect to see even more innovative and exciting applications in the years to come. Whether it's helping us solve complex problems, creating new forms of entertainment, or transforming the way we live and work, LLMs are poised to play a central role in shaping the future of AI. [Figure 1 diagram: a lineage chart connecting Pretrain Models, SFT Models, RM Models, and RLHF Models to Qwen-Chat, Code-Qwen, Code-Qwen-Chat, and Math-Qwen-Chat; the arrows are not recoverable from the extraction.] | 2309.16609#4 | 2309.16609#6 | 2309.16609 | [
"2305.20050"
] |
2309.16609#6 | Qwen Technical Report | Figure 1: Model Lineage of the Qwen Series. We have pretrained the language models, namely QWEN, on massive datasets containing trillions of tokens. We then use SFT and RLHF to align QWEN to human preference and thus we have QWEN-CHAT and specifically its improved version QWEN-CHAT-RLHF. Additionally, we also develop specialized models for coding and mathematics, such as CODE-QWEN, CODE-QWEN-CHAT, and MATH-QWEN-CHAT based on QWEN with similar techniques. Note that we previously released the multimodal LLM, QWEN-VL and QWEN-VL-CHAT (Bai et al., 2023), which are also based on our QWEN base models. | 2309.16609#5 | 2309.16609#7 | 2309.16609 | [
"2305.20050"
] |
2309.16609#7 | Qwen Technical Report | Despite their impressive capabilities, LLMs are often criticized for their lack of reproducibility, steerability, and accessibility to service providers. In this work, we are pleased to present and release the initial version of our LLM series, QWEN. QWEN is a moniker that derives from the Chinese phrase Qianwen, which translates to "thousands of prompts" and conveys the notion of embracing a wide range of inquiries. QWEN is a comprehensive language model series that encompasses distinct models with varying parameter counts. The model series include the base pretrained language models, chat models finetuned with human alignment techniques, i.e., supervised finetuning (SFT), reinforcement learning with human feedback (RLHF), etc., as well as specialized models in coding and math. The details are outlined below: | 2309.16609#6 | 2309.16609#8 | 2309.16609 | [
"2305.20050"
] |
2309.16609#8 | Qwen Technical Report | 1. The base language models, namely QWEN, have undergone extensive training using up to 3 trillion tokens of diverse texts and codes, encompassing a wide range of areas. These models have consistently demonstrated superior performance across a multitude of downstream tasks, even when compared to their more significantly larger counterparts. 2. The QWEN-CHAT models have been carefully finetuned on a curated dataset relevant to task performing, chat, tool use, agent, safety, etc. | 2309.16609#7 | 2309.16609#9 | 2309.16609 | [
"2305.20050"
] |
2309.16609#9 | Qwen Technical Report | The benchmark evaluation demonstrates that the SFT models can achieve superior performance. Furthermore, we have trained reward models to mimic human preference and applied them in RLHF for chat models that can produce responses preferred by humans. Through the human evaluation of a challenging test, we find that QWEN-CHAT models trained with RLHF are highly competitive, still falling behind GPT-4 on our benchmark. 3. In addition, we present specialized models called CODE-QWEN, which includes CODE- QWEN-7B and CODE-QWEN-14B, as well as their chat models, CODE-QWEN-14B- CHAT and CODE-QWEN-7B-CHAT. Specifically, CODE-QWEN has been pre-trained on extensive datasets of code and further fine-tuned to handle conversations related to code generation, debugging, and interpretation. The results of experiments conducted on benchmark datasets, such as HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021), and HumanEvalPack (Muennighoff et al., 2023), demonstrate the high level of proficiency of CODE-QWEN in code understanding and generation. | 2309.16609#8 | 2309.16609#10 | 2309.16609 | [
"2305.20050"
] |
2309.16609#10 | Qwen Technical Report | 4. This research additionally introduces MATH-QWEN-CHAT, specifically designed to tackle mathematical problems. Our results show that both MATH-QWEN-7B-CHAT and MATH-QWEN-14B-CHAT outperform open-source models of comparable size by large margins and approach GPT-3.5 on math-related benchmark datasets such as GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021). | 2309.16609#9 | 2309.16609#11 | 2309.16609 | [
"2305.20050"
] |
2309.16609#11 | Qwen Technical Report | 5. Besides, we have open-sourced QWEN-VL and QWEN-VL-CHAT, which have the versatile ability to comprehend visual and language instructions. These models outperform the current open-source vision-language models across various evaluation benchmarks and support text recognition and visual grounding in both Chinese and English languages. Moreover, these models enable multi-image conversations and storytelling. Further details can be found in Bai et al. (2023). Now, we officially open-source the 14B-parameter and 7B-parameter base pretrained models QWEN and aligned chat models QWEN-CHAT[2]. This release aims at providing more comprehensive and powerful LLMs at developer- or application-friendly scales. | 2309.16609#10 | 2309.16609#12 | 2309.16609 | [
"2305.20050"
] |
2309.16609#12 | Qwen Technical Report | The structure of this report is as follows: Section 2 describes our approach to pretraining and results of QWEN. Section 3 covers our methodology for alignment and reports the results of both automatic evaluation and human evaluation. Additionally, this section describes details about our efforts in building chat models capable of tool use, code interpreter, and agent. In Sections 4 and 5, we delve into specialized models of coding and math and their performance. Section 6 provides an overview of relevant related work, and Section 7 concludes this paper and points out our future work. | 2309.16609#11 | 2309.16609#13 | 2309.16609 | [
"2305.20050"
] |
2309.16609#13 | Qwen Technical Report | # 2 PRETRAINING The pretraining stage involves learning from a vast amount of data to acquire a comprehensive understanding of the world and its various complexities. This includes not only basic language capabilities but also advanced skills such as arithmetic, coding, and logical reasoning. In this section, we introduce the data, the model design and scaling, as well as the comprehensive evaluation results on benchmark datasets. # 2.1 DATA The size of data has proven to be a crucial factor in developing a robust large language model, as highlighted in previous research (Hoffmann et al., 2022; Touvron et al., 2023b). To create an effective pretraining dataset, it is essential to ensure that the data are diverse and cover a wide range of types, domains, and tasks. | 2309.16609#12 | 2309.16609#14 | 2309.16609 | [
"2305.20050"
] |
2309.16609#14 | Qwen Technical Report | [2] GitHub: https://github.com/QwenLM/Qwen. [Figure 2 radar chart: legend GPT-4, GPT-3.5, Previous 13B SOTA, Qwen-14B; axes MMLU, BBH, C-Eval, PIQA, AGIEval, HellaSwag, Gaokao-Bench, CSQA, GSM8K, MBPP, MATH, HumanEval.] Figure 2: Performance of GPT-4, GPT-3.5, the previous 13B SOTA, as well as QWEN-14B. | 2309.16609#13 | 2309.16609#15 | 2309.16609 | [
"2305.20050"
] |
2309.16609#15 | Qwen Technical Report | We demonstrate the results on 12 datasets covering multiple domains, including language understanding, knowledge, reasoning, etc. QWEN significantly outperforms the previous SOTA of similar model sizes, but still lags behind both GPT-3.5 and GPT-4. Our dataset is designed to meet these requirements and includes public web documents, encyclopedia, books, codes, etc. Additionally, our dataset is multilingual, with a significant portion of the data being in English and Chinese. To ensure the quality of our pretraining data, we have developed a comprehensive data preprocessing procedure. For public web data, we extract text from HTML and use language identification tools to determine the language. To increase the diversity of our data, we employ deduplication techniques, including exact-match deduplication after normalization and fuzzy deduplication using MinHash and LSH algorithms. To filter out low-quality data, we employ a combination of rule-based and machine-learning-based methods. Specifically, we use multiple models to score the content, including language models, text-quality scoring models, and models for identifying potentially offensive or inappropriate content. We also manually sample texts from various sources and review them to ensure their quality. To further enhance the quality of our data, we selectively up-sample data from certain sources, to ensure that our models are trained on a diverse range of high-quality content. | 2309.16609#14 | 2309.16609#16 | 2309.16609 | [
"2305.20050"
] |
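A minimal sketch of the fuzzy-deduplication step described above, using MinHash + LSH via the datasketch library. The shingle size, threshold, and the `corpus` iterable are illustrative assumptions, not the paper's settings.

```python
from datasketch import MinHash, MinHashLSH

def minhash(text: str, k: int = 5, num_perm: int = 128) -> MinHash:
    """MinHash over character k-shingles of a document."""
    m = MinHash(num_perm=num_perm)
    for i in range(max(len(text) - k + 1, 1)):
        m.update(text[i:i + k].encode("utf-8"))
    return m

lsh = MinHashLSH(threshold=0.8, num_perm=128)
kept = []
for doc_id, doc in enumerate(corpus):  # corpus: assumed iterable of strings
    m = minhash(doc)
    if not lsh.query(m):               # no near-duplicate indexed so far
        lsh.insert(str(doc_id), m)
        kept.append(doc)
```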
2309.16609#16 | Qwen Technical Report | In recent studies (Zeng et al., 2022; Aribandi et al., 2021; Raffel et al., 2020), it has been demonstrated that pretraining language models with multi-task instructions can enhance their zero-shot and few-shot performance. To further enhance the performance of our model, we have incorporated high-quality instruction data into our pretraining process. To safeguard the integrity of our benchmark assessment, we have adopted a similar approach as Brown et al. (2020) and meticulously eliminated any instruction samples that exhibit a 13-gram overlap with any data present in the test sets utilized in our evaluation. | 2309.16609#15 | 2309.16609#17 | 2309.16609 | [
"2305.20050"
] |
2309.16609#17 | Qwen Technical Report | [Figure 3 bar chart: compression ratio per language for each tokenizer; tick values garbled in the extraction.] Figure 3: Encoding compression rates of different models. We randomly selected 1 million document corpora of each language to test and compare the encoding compression rates of different models (with XLM-R (Conneau et al., 2019), which supports 100 languages, as the base value 1, not shown in the figure). As can be seen, while ensuring the efficient decoding of Chinese, English, and code, QWEN also achieves a high compression rate for many other languages (such as th, he, ar, ko, vi, ja, tr, id, pl, ru, nl, pt, it, de, es, fr, etc.), equipping the model with strong scalability as well as high training and inference efficiency in these languages. | 2309.16609#16 | 2309.16609#18 | 2309.16609 | [
"2305.20050"
] |
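The 13-gram decontamination filter mentioned above can be sketched as follows. Tokenization here is plain whitespace splitting for illustration; the paper does not specify its tokenization, and `test_set_texts` / `instruction_samples` are assumed inputs.

```python
def ngrams(text: str, n: int = 13) -> set[tuple[str, ...]]:
    toks = text.split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

# Collect all 13-grams appearing in any benchmark test set.
test_grams: set[tuple[str, ...]] = set()
for example in test_set_texts:           # assumed iterable of benchmark strings
    test_grams |= ngrams(example)

# Drop any instruction sample sharing a 13-gram with a test set.
clean = [s for s in instruction_samples if not (ngrams(s) & test_grams)]
```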
2309.16609#18 | Qwen Technical Report | Given the large number of downstream tasks, it is not feasible to repeat this filtering process for all tasks. Instead, we have made sure that the instruction data for the reported tasks have undergone our filtering process to ensure their accuracy and reliability. Finally, we have built a dataset of up to 3 trillion tokens. 2.2 TOKENIZATION The design of vocabulary significantly impacts the training efficiency and the downstream task performance. In this study, we utilize byte pair encoding (BPE) as our tokenization method, following GPT-3.5 and GPT-4. We start with the open-source fast BPE tokenizer, tiktoken (Jain, 2022), and select the vocabulary cl100k_base as our starting point. To enhance the performance of our model on multilingual downstream tasks, particularly in Chinese, we augment the vocabulary with commonly used Chinese characters and words, as well as those in other languages. Also, following Touvron et al. (2023a;b), we have split numbers into single digits. The final vocabulary size is approximately 152K. The performance of the QWEN tokenizer in terms of compression is depicted in Figure 3. In this comparison, we have evaluated QWEN against several other tokenizers, including XLM-R (Conneau et al., 2019), LLaMA (Touvron et al., 2023a), Baichuan (Inc., 2023a), and InternLM (InternLM Team, 2023). Our findings reveal that QWEN achieves higher compression efficiency than its competitors in most languages. This implies that the cost of serving can be significantly reduced since a smaller number of tokens from QWEN can convey more information than its competitors. Furthermore, we have conducted preliminary experiments to ensure that scaling the vocabulary size of QWEN does not negatively impact the downstream performance of the pretrained model. Despite the increase in vocabulary size, our experiments have shown that QWEN maintains its performance levels in downstream evaluation. | 2309.16609#17 | 2309.16609#19 | 2309.16609 | [
"2305.20050"
] |
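The compression comparison above boils down to counting tokens per UTF-8 byte. A minimal sketch with tiktoken's cl100k_base vocabulary (the stated starting point); the sample string is arbitrary, and this measures the base vocabulary rather than the augmented QWEN tokenizer.

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def tokens_per_byte(text: str) -> float:
    # Fewer tokens per byte means better compression for that language.
    return len(enc.encode(text)) / len(text.encode("utf-8"))

sample = "千の質問が千の答えを生む。"  # arbitrary multilingual sample text
print(f"{tokens_per_byte(sample):.3f} tokens/byte")
```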
2309.16609#19 | Qwen Technical Report | # 2.3 ARCHITECTURE QWEN is designed using a modified version of the Transformer architecture. Specifically, we have adopted the recent open-source approach of training large language models, LLaMA (Touvron et al., 2023a), which is widely regarded as the top open-source LLM. Our modifications to the architecture include: Table 1: Model sizes, architectures, and optimization hyper-parameters. # of Params: 1.8B / 7B / 14B; Hidden size: 2048 / 4096 / 5120; Heads: 16 / 32 / 40; Layers: 24 / 32 / 40; Learning rate: 3.0 × 10^-4 for all three sizes; Batch size: 4M for all three sizes; Training tokens: 2.2T / 2.4T / 3.0T. | 2309.16609#18 | 2309.16609#20 | 2309.16609 | [
"2305.20050"
] |
2309.16609#20 | Qwen Technical Report | • Embedding and output projection. Based on preliminary experimental findings, we have opted for the untied embedding approach instead of tying the weights of input embedding and output projection. This decision was made in order to achieve better performance with the price of memory costs. | 2309.16609#19 | 2309.16609#21 | 2309.16609 | [
"2305.20050"
] |
2309.16609#21 | Qwen Technical Report | • Positional embedding. We have chosen RoPE (Rotary Positional Embedding) (Su et al., 2021) as our preferred option for incorporating positional information into our model. RoPE has been widely adopted and has demonstrated success in contemporary large language models, notably PaLM (Chowdhery et al., 2022; Anil et al., 2023) and LLaMA (Touvron et al., 2023a;b). In particular, we have opted to use FP32 precision for the inverse frequency matrix, rather than BF16 or FP16, in order to prioritize model performance and achieve higher accuracy. | 2309.16609#20 | 2309.16609#22 | 2309.16609 | [
"2305.20050"
] |
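A minimal sketch of RoPE angle computation with the inverse-frequency matrix kept in float32, per the design note above; the dimensions and base are illustrative.

```python
import torch

def rope_angles(seq_len: int, head_dim: int, base: float = 10000.0) -> torch.Tensor:
    # inv_freq is computed and kept in float32 for accuracy, even if the
    # rest of the model runs in BF16/FP16.
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2, dtype=torch.float32) / head_dim))
    positions = torch.arange(seq_len, dtype=torch.float32)
    return torch.outer(positions, inv_freq)   # shape: (seq_len, head_dim // 2)

angles = rope_angles(seq_len=2048, head_dim=128)
cos, sin = angles.cos(), angles.sin()          # applied to rotated query/key halves
```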
2309.16609#22 | Qwen Technical Report | • Bias. For most layers, we remove biases following Chowdhery et al. (2022), but we add biases in the QKV layer of attention to enhance the extrapolation ability of the model (Su, 2023b). • Pre-Norm & RMSNorm. In modern Transformer models, pre-normalization is the most widely used approach, which has been shown to improve training stability compared to post-normalization. Recent research has suggested alternative methods for better training stability, which we plan to explore in future versions of our model. Additionally, we have replaced the traditional layer normalization technique described in (Ba et al., 2016) with RMSNorm (Jiang et al., 2023). This change has resulted in equivalent performance while also improving efficiency. | 2309.16609#21 | 2309.16609#23 | 2309.16609 | [
"2305.20050"
] |
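For reference, a minimal RMSNorm implementation of the kind substituted for LayerNorm above; the epsilon value is an illustrative choice.

```python
import torch
from torch import nn

class RMSNorm(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalize by the root mean square only, with no mean subtraction
        # and no bias, which is what makes it cheaper than LayerNorm.
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        return x * rms * self.weight
```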
2309.16609#23 | Qwen Technical Report | • Activation function. We have selected SwiGLU (Shazeer, 2020) as our activation function, a combination of Swish (Ramachandran et al., 2017) and Gated Linear Unit (Dauphin et al., 2017). Our initial experiments have shown that activation functions based on GLU generally outperform other baseline options, such as GeLU (Hendrycks & Gimpel, 2016). As is common practice in previous research, we have reduced the dimension of the feed-forward network (FFN) from 4 times the hidden size to 8/3 of the hidden size. | 2309.16609#22 | 2309.16609#24 | 2309.16609 | [
"2305.20050"
] |
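A minimal sketch of a SwiGLU feed-forward block with the reduced (8/3 × hidden) inner dimension described above; the rounding of the inner dimension is an illustrative choice.

```python
import torch
from torch import nn
import torch.nn.functional as F

class SwiGLU(nn.Module):
    def __init__(self, hidden: int):
        super().__init__()
        inner = int(8 * hidden / 3)              # 8/3 of the hidden size
        self.gate = nn.Linear(hidden, inner, bias=False)
        self.up = nn.Linear(hidden, inner, bias=False)
        self.down = nn.Linear(inner, hidden, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Swish(gate) elementwise-multiplied with a linear "up" projection.
        return self.down(F.silu(self.gate(x)) * self.up(x))
```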
2309.16609#24 | Qwen Technical Report | 2.4 TRAINING To train QWEN, we follow the standard approach of autoregressive language modeling, as described in Radford et al. (2018). This involves training the model to predict the next token based on the context provided by the previous tokens. We train models with context lengths of 2048. To create batches of data, we shuffle and merge the documents, and then truncate them to the specified context lengths. To improve computational efficiency and reduce memory usage, we employ Flash Attention in the attention modules (Dao et al., 2022). We adopt the standard optimizer AdamW (Kingma & Ba, 2014; Loshchilov & Hutter, 2017) for pretraining optimization. We set the hyperparameters β1 = 0.9, β2 = 0.95, and ε = 10^-8. | 2309.16609#23 | 2309.16609#25 | 2309.16609 | [
"2305.20050"
] |
2309.16609#25 | Qwen Technical Report | We use a cosine learning rate schedule with a specified peak learning rate for each model size. The learning rate is decayed to a minimum learning rate of 10% of the peak learning rate. All the models are trained with BFloat16 mixed precision for training stability. 2.5 CONTEXT LENGTH EXTENSION Transformer models have a significant limitation in terms of the context length for their attention mechanism. As the context length increases, the quadratic-complexity computation leads to a drastic increase in both computation and memory costs. In this work, we have implemented simple training-free techniques that are solely applied during inference to extend the context length of the model. One of the key techniques we have used is NTK-aware interpolation (bloc97, 2023). | 2309.16609#24 | 2309.16609#26 | 2309.16609 | [
"2305.20050"
] |
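The schedule above—cosine decay from the peak learning rate down to a 10% floor—can be sketched as follows; warmup handling is omitted for brevity.

```python
import math

def cosine_lr(step: int, total_steps: int, peak_lr: float, min_ratio: float = 0.1) -> float:
    """Cosine decay from peak_lr to min_ratio * peak_lr over total_steps."""
    progress = min(step / total_steps, 1.0)
    cos = 0.5 * (1.0 + math.cos(math.pi * progress))
    return peak_lr * (min_ratio + (1.0 - min_ratio) * cos)

# e.g., the 7B peak learning rate of 3.0e-4 at mid-training:
print(cosine_lr(step=50_000, total_steps=100_000, peak_lr=3.0e-4))
```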
2309.16609#26 | Qwen Technical Report | Table 2: Overall performance on widely-used benchmarks compared to open-source base models. Our largest QWEN model with 14 billion parameters outperforms previous 13B SoTA models on all datasets. Columns: MMLU (5-shot) / C-Eval (5-shot) / GSM8K (8-shot) / MATH (4-shot) / HumanEval (0-shot) / MBPP (0-shot) / BBH (3-shot). MPT-7B: 30.8 / 23.5 / 9.1 / 3.0 / 18.3 / 22.8 / 35.6. MPT-30B: 47.9 / - / 15.2 / 3.1 / 25.0 / 32.8 / 38.0. Falcon-7B: 27.8 / - / 6.8 / 2.3 / - / 11.2 / 28.0. Falcon-40B: 57.0 / - / 19.6 / 5.5 / - / 29.8 / 37.1. ChatGLM2-6B: 47.9 / 51.7 / 32.4 / 6.5 / - / - / 33.7. InternLM-7B: 51.0 / 53.4 / 31.2 / 6.3 / 10.4 / 14.0 / 37.0. InternLM-20B: 62.1 / 58.8 / 52.6 / 7.9 / 25.6 / 35.6 / 52.5. Baichuan2-7B: 54.7 / 56.3 / 24.6 / 5.6 / 18.3 / 24.2 / 41.6. Baichuan2-13B: 59.5 / 59.0 / 52.8 / 10.1 / 17.1 / 30.2 / 49.0. LLaMA-7B: 35.6 / 27.3 / 11.0 / 2.9 / 12.8 / 17.7 / 33.5. LLaMA-13B: 47.7 / 31.8 / 20.3 / 4.2 / 15.8 / 22.0 / 37.9. LLaMA-33B: 58.7 / 37.5 / 42.3 / 7.1 / 21.7 / 30.2 / 50.0. LLaMA-65B: 63.7 / 40.4 / 54.4 / 10.6 / 23.7 / 37.7 / 58.4. LLAMA 2-7B: 46.8 / 32.5 / 16.7 / 3.3 / 12.8 / 20.8 / 38.2. LLAMA 2-13B: 55.0 / 41.4 / 29.6 / 5.0 / 18.9 / 30.3 / 45.6. LLAMA 2-34B: 62.6 / - / 42.2 / 6.2 / 22.6 / 33.0 / 44.1. LLAMA 2-70B: 69.8 / 50.1 / 63.3 / 13.5 / 29.9 / 45.0 / 64.9. StableBeluga2-70B: 68.6 / 51.4 / 69.6 / 14.6 / 28.0 / 11.4 / 69.3. QWEN-1.8B: 44.6 / 54.7 / 21.2 / 5.6 / 17.1 / 14.8 / 28.2. QWEN-7B: 58.2 / 63.5 / 51.7 / 11.6 / 29.9 / 31.6 / 45.0. QWEN-14B: 66.3 / 72.1 / 61.3 / 24.8 / 32.3 / 40.8 / 53.4. | 2309.16609#25 | 2309.16609#27 | 2309.16609 | [
"2305.20050"
] |
2309.16609#27 | Qwen Technical Report | Unlike position interpolation (PI) (Chen et al., 2023a) which scales each dimension of RoPE equally, NTK-aware interpolation adjusts the base of RoPE to prevent the loss of high-frequency information in a training-free manner. To further improve performance, we have also implemented a trivial extension called dynamic NTK-aware interpolation, which is later formally discussed in (Peng et al., 2023a). It dynamically changes the scale by chunks, avoiding severe performance degradation. These techniques allow us to effectively extend the context length of Transformer models without compromising their computational efficiency or accuracy. | 2309.16609#26 | 2309.16609#28 | 2309.16609 | [
"2305.20050"
] |
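Unlike PI's position rescaling, NTK-aware interpolation rescales the RoPE base so low frequencies stretch while high frequencies are preserved. The sketch below uses the commonly cited NTK-aware scaling rule; treat the formula as an assumption rather than the paper's exact implementation.

```python
def ntk_base(base: float, scale: float, head_dim: int) -> float:
    """Adjust the RoPE base for an extension factor `scale` (train-free)."""
    return base * scale ** (head_dim / (head_dim - 2))

# Example: extending a 2k-trained model to an 8k context (scale = 4).
new_base = ntk_base(base=10000.0, scale=4.0, head_dim=128)
```

The "dynamic" variant mentioned above simply recomputes this scale from the current sequence length at inference time instead of fixing it in advance.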
2309.16609#28 | Qwen Technical Report | QWEN additionally incorporates two attention mechanisms: LogN-Scaling (Chiang & Cholak, 2022; Su, 2023a) and window attention (Beltagy et al., 2020). LogN-Scaling rescales the dot product of the query and key by a factor that depends on the ratio of the context length to the training length, ensuring that the entropy of the attention value remains stable as the context length grows. Window attention restricts the attention to a limited context window, preventing the model from attending to tokens that are too far away. We also observed that the long-context modeling ability of our model varies across layers, with lower layers being more sensitive in context length extension compared to the higher layers. To leverage this observation, we assign different window sizes to each layer, using shorter windows for lower layers and longer windows for higher layers. | 2309.16609#27 | 2309.16609#29 | 2309.16609 | [
"2305.20050"
] |
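A simplified rendering of the LogN-Scaling idea: rescale the query-key logits by log(context length) / log(training length) once the context exceeds the training length. This global form is a sketch of the cited technique, not necessarily its exact per-position formulation.

```python
import math
import torch

def logn_scale(scores: torch.Tensor, context_len: int, train_len: int = 2048) -> torch.Tensor:
    """Rescale attention logits to keep attention entropy roughly stable."""
    if context_len <= train_len:
        return scores
    return scores * (math.log(context_len) / math.log(train_len))
```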
2309.16609#29 | Qwen Technical Report | 2.6 EXPERIMENTAL RESULTS To evaluate the zero-shot and few-shot learning capabilities of our models, we conduct a thorough benchmark assessment using a series of datasets. We compare QWEN with the most recent open-source base models, including LLaMA (Touvron et al., 2023a), LLAMA 2 (Touvron et al., 2023b), MPT (Mosaic ML, 2023), Falcon (Almazrouei et al., 2023), Baichuan2 (Yang et al., 2023), ChatGLM2 (ChatGLM2 Team, 2023), InternLM (InternLM Team, 2023), XVERSE (Inc., 2023b), and StableBeluga2 (Stability AI, 2023). Our evaluation covers a total of 7 popular benchmarks, | 2309.16609#28 | 2309.16609#30 | 2309.16609 | [
"2305.20050"
] |
2309.16609#30 | Qwen Technical Report | Table 3: Results of QWEN on long-context inference using various techniques. Our experimental findings reveal that the application of our crucial techniques enables the model to consistently achieve low perplexity as the context length increases. This suggests that these techniques play a significant role in enhancing the model's ability to comprehend and generate lengthy texts. Sequence length: 1024 / 2048 / 4096 / 8192 / 16384. QWEN-7B: 4.23 / 3.78 / 39.35 / 469.81 / 2645.09. QWEN-7B + dynamic ntk: 4.23 / 3.78 / 3.59 / 3.66 / 5.71. QWEN-7B + dynamic ntk + logn: 4.23 / 3.78 / 3.58 / 3.56 / 4.62. QWEN-7B + dynamic ntk + logn + window attn: 4.23 / 3.78 / 3.58 / 3.49 / 4.32. QWEN-14B: - / 3.46 / 22.79 / 334.65 / 3168.35. QWEN-14B + dynamic ntk + logn + window attn: - / 3.46 / 3.29 / 3.18 / 3.42. | 2309.16609#29 | 2309.16609#31 | 2309.16609 | [
"2305.20050"
] |
2309.16609#31 | Qwen Technical Report | which are MMLU (5-shot) (Hendrycks et al., 2020), C-Eval (5-shot) (Huang et al., 2023), GSM8K (8-shot) (Cobbe et al., 2021), MATH (4-shot) (Hendrycks et al., 2021), HumanEval (0-shot) (Chen et al., 2021), MBPP (0-shot) (Austin et al., 2021), and BBH (Big Bench Hard) (3-shot) (Suzgun et al., 2022). We aim to provide a comprehensive summary of the overall performance of our models across these benchmarks. In this evaluation, we focus on the base language models without alignment and collect the baselines' best scores from their official results and OpenCompass (OpenCompass Team, 2023). The results are presented in Table 2. Our experimental results demonstrate that the three QWEN models exhibit exceptional performance across all downstream tasks. It is worth noting that even the larger models, such as LLaMA2-70B, are outperformed by QWEN-14B in 3 tasks. QWEN-7B also performs admirably, surpassing LLaMA2-13B and achieving comparable results to Baichuan2-13B. Notably, despite having a relatively small number of parameters, QWEN-1.8B is capable of competitive performance on certain tasks and even outperforms larger models in some instances. The findings highlight the impressive capabilities of the QWEN models, particularly QWEN-14B, and suggest that smaller models, such as QWEN-1.8B, can still achieve strong performance in certain applications. | 2309.16609#30 | 2309.16609#32 | 2309.16609 | [
"2305.20050"
] |
2309.16609#32 | Qwen Technical Report | To evaluate the effectiveness of context length extension, Table 3 presents the test results on arXiv[3] in terms of perplexity (PPL). These results demonstrate that by combining NTK-aware interpolation, LogN-Scaling, and layer-wise window assignment, we can effectively maintain the performance of our models in the context of over 8192 tokens. # 3 ALIGNMENT Pretrained large language models have been found to be not aligned with human behavior, making them unsuitable for serving as AI assistants in most cases. Recent research has shown that the use of alignment techniques, such as supervised finetuning (SFT) and reinforcement learning from human feedback (RLHF), can significantly improve the ability of language models to engage in natural conversation. In this section, we will delve into the details of how QWEN models have been trained using SFT and RLHF, and evaluate their performance in the context of chat-based assistance. 3.1 SUPERVISED FINETUNING To gain an understanding of human behavior, the initial step is to carry out SFT, which finetunes a pretrained LLM on chat-style data, including both queries and responses. In the following sections, we will delve into the details of data construction and training methods. | 2309.16609#31 | 2309.16609#33 | 2309.16609 | [
"2305.20050"
] |
2309.16609#33 | Qwen Technical Report | [3] The dataset contains academic papers from https://arxiv.org. 3.1.1 DATA | 2309.16609#32 | 2309.16609#34 | 2309.16609 | [
"2305.20050"
] |
2309.16609#34 | Qwen Technical Report | To enhance the capabilities of our supervised finetuning datasets, we have annotated conversations in multiple styles. While conventional datasets (Wei et al., 2022a) contain a vast amount of data prompted with questions, instructions, and answers in natural language, our approach takes it a step further by annotating human-style conversations. This practice, inspired by Ouyang et al. (2022), aims at improving the model's helpfulness by focusing on natural language generation for diverse tasks. To ensure the model's ability to generalize to a wide range of scenarios, we specifically excluded data formatted in prompt templates that could potentially limit its capabilities. Furthermore, we have prioritized the safety of the language model by annotating data related to safety concerns such as violence, bias, and pornography. In addition to data quality, we have observed that the training method can significantly impact the final performance of the model. To achieve this, we utilized the ChatML-style format (OpenAI, 2022), which is a versatile meta language capable of describing both the metadata (such as roles) and the content of a turn. This format enables the model to effectively distinguish between various types of information, including system setup, user inputs, and assistant outputs, among others. By leveraging this approach, we can enhance the model's ability to accurately process and analyze complex conversational data. | 2309.16609#33 | 2309.16609#35 | 2309.16609 | [
"2305.20050"
] |
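For concreteness, a sketch of a ChatML-style rendering of one conversation. The `<|im_start|>` / `<|im_end|>` role tags follow the common ChatML convention; the exact control tokens used in training are an assumption here.

```python
def to_chatml(system: str, user: str, assistant: str) -> str:
    """Render one system/user/assistant exchange in ChatML style."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n{assistant}<|im_end|>\n"
    )

print(to_chatml("You are a helpful assistant.", "Hello!", "Hi! How can I help?"))
```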
2309.16609#35 | Qwen Technical Report | # 3.1.2 TRAINING Consistent with pretraining, we also apply next-token prediction as the training task for SFT. We apply the loss masks for the system and user inputs. More details are demonstrated in Section A.1.1. The model's training process utilizes the AdamW optimizer, with the following hyperparameters: β1 set to 0.9, β2 set to 0.95, and ε set to 10^-8. | 2309.16609#34 | 2309.16609#36 | 2309.16609 | [
"2305.20050"
] |
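The loss masking mentioned in 3.1.2 can be sketched as follows: only assistant tokens are supervised, and system/user spans are set to -100 so the cross-entropy loss ignores them. The span representation is schematic.

```python
def build_labels(token_spans):
    """token_spans: list of (role, token_ids) pairs in ChatML order."""
    input_ids, labels = [], []
    for role, ids in token_spans:
        input_ids.extend(ids)
        if role == "assistant":
            labels.extend(ids)                 # learn to produce these tokens
        else:
            labels.extend([-100] * len(ids))   # masked out of the loss
    return input_ids, labels
```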
2309.16609#36 | Qwen Technical Report | The sequence length is limited to 2048, and the batch size is 128. The model undergoes a total of 4000 steps, with the learning rate gradually increased over the first 1430 steps, reaching a peak of 2 × 10^-6. To prevent overfitting, weight decay is applied with a value of 0.1, dropout is set to 0.1, and gradient clipping is enforced with a limit of 1.0. 3.2 REINFORCEMENT LEARNING FROM HUMAN FEEDBACK While SFT has proven to be effective, we acknowledge that its generalization and creativity capabilities may be limited, and it is prone to overfitting. To address this issue, we have implemented Reinforcement Learning from Human Feedback (RLHF) to further align SFT models with human preferences, following the approaches of Ouyang et al. (2022); Christiano et al. (2017). This process involves training a reward model and using Proximal Policy Optimization (PPO) (Schulman et al., 2017) to conduct policy training. 3.2.1 REWARD MODEL To create a successful reward model, like building a large language model (LLM), it is crucial to first undergo pretraining and then finetuning. This pretraining process, also known as preference model pretraining (PMP) (Bai et al., 2022b), necessitates a vast dataset of comparison data. This dataset consists of sample pairs, each containing two distinct responses for a single query and their corresponding preferences. Similarly, finetuning is also conducted on this type of comparison data, but with a higher quality due to the presence of quality annotations. During the fine-tuning phase, we gather a variety of prompts and adjust the reward model based on human feedback for responses from the QWEN models. To ensure the diversity and complexity of user prompts are properly taken into account, we have created a classification system with around 6600 detailed tags and implemented a balanced sampling algorithm that considers both diversity and complexity when selecting prompts for annotation by the reward model (Lu et al., 2023). To generate a wide range of responses, we have utilized QWEN models of different sizes and sampling strategies, as diverse responses can help reduce annotation difficulties and enhance the performance of the reward model. These responses are then evaluated by annotators following a standard annotation guideline, and comparison pairs are formed based on their scores. In creating the reward model, we utilize the same-sized pre-trained language model QWEN to initiate the process. It is important to mention that we have incorporated a pooling layer into the original Table 4: Test Accuracy of QWEN preference model pretraining (PMP) and reward model (RM) on diverse human preference benchmark datasets. Dataset columns: QWEN Helpful-base; QWEN Helpful-online; Anthropic Helpful-base; Anthropic Helpful-online; OpenAI Summ.; Stanford (label truncated). The table body falls outside this chunk. | 2309.16609#35 | 2309.16609#37 | 2309.16609 | [
"2305.20050"
] |