Dataset columns (name: type, observed length or value range):
doi: string, lengths 10-10
chunk-id: int64, values 0-936
chunk: string, lengths 401-2.02k
id: string, lengths 12-14
title: string, lengths 8-162
summary: string, lengths 228-1.92k
source: string, lengths 31-31
authors: string, lengths 7-6.97k
categories: string, lengths 5-107
comment: string, lengths 4-398
journal_ref: string, lengths 8-194
primary_category: string, lengths 5-17
published: string, lengths 8-8
updated: string, lengths 8-8
references: list
2309.16609
258
s=df['sizes'], c=df['colors'])

# Show the plot
plt.show()

Observation: [scatter plot image]

Thought: The code interpreter returned the result.
Final Answer: The scatter plot has been successfully created using the data from the "scatter_data.csv" file. The size of each point is determined by the 'sizes' column, and the color is determined by the 'colors' column. This allows for a high degree of customization in the appearance of the scatter plot.

Observation: [scatter plot image]

Thought: The plot is now complete. We can see that the points are scattered according to the x and y values, and the size and color of the points are determined by the values in the "sizes" and "colors" columns of the dataframe.
Final Answer: The scatter plot is now complete.

user: [Uploaded scatter_data.csv] Create a scatter plot with different size and color settings

Thought: I should use the code_interpreter tool to execute Python code to create a scatter plot with different size and color settings.
Action: code_interpreter
Action Input:
Observation: error:
KeyError                                  Traceback (most recent call last)
Input In
2309.16609#258
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
259
execute Python code to create a scatter plot with different size and color settings.
Action: code_interpreter
Action Input:
import matplotlib.pyplot as plt

# Load the data
data = pd.read_csv('scatter_data.csv')

# Create a scatter plot with different size and color settings
plt.scatter(data['x'], data['y'], s=data['size'], c=data['color'])

# Show the plot
plt.show()

Observation: error:
KeyError                                  Traceback (most recent call last)
Input In [942], in <cell line: 38>()
     35 data = pd.read_csv('tmp_upload_file/kZvwpQmL2RjSFSVSHHXNST/scatter_data.csv')
     37 # Create a scatter plot with different size and color settings
---> 38 plt.scatter(data['x'], data['y'], s=data['size'], c=data['color'])
     40 # Show the plot
     41 plt.show()

KeyError: 'size'
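The KeyError above occurs because CODE LLAMA guesses column names ('size', 'color') that do not exist in the uploaded file. Below is a minimal sketch of the safer pattern the figure credits to QWEN-CHAT, checking the actual columns before plotting; the file name and the expected columns ('x', 'y', 'sizes', 'colors') are taken from the transcript, and the guard logic is an illustrative addition, not code from the paper.

import pandas as pd
import matplotlib.pyplot as plt

# Load the uploaded file (name taken from the transcript above).
df = pd.read_csv('scatter_data.csv')

# Inspect the real column names instead of guessing them.
print(df.columns.tolist())  # per the figure: ['x', 'y', 'sizes', 'colors']

# Fail early, with a clear message, if the expected columns are absent.
required = {'x', 'y', 'sizes', 'colors'}
missing = required - set(df.columns)
if missing:
    raise KeyError(f"missing columns: {sorted(missing)}")

plt.scatter(df['x'], df['y'], s=df['sizes'], c=df['colors'])
plt.show()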
2309.16609#259
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
261
3  0.273015  3.678973  63.538505   5.963053
4  3.445224  2.462327  59.921846  27.207964

Thought: Now I understand the data format. I need to use the code_interpreter API to run the code.
Action: code_interpreter
Action Input:
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv('scatter_data.csv')

fig, ax = plt.subplots()
ax.scatter(df['x'], df['y'], s=df['sizes'], c=df['colors'])
plt.show()

Observation: [scatter plot image]

Thought: The code interpreter returned the result.
Final Answer: The scatter plot has been successfully created using the data from the "scatter_data.csv" file. The size of each point is determined by the 'sizes' column, and the color is determined by the 'colors' column. This allows for a high degree of customization in the appearance of the scatter plot.
2309.16609#261
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
262
Thought: Firstly, I need to use the code interpreter API to load the data and taking a look at the first few rows to understand what we're working with.
Action: code_interpreter
Action Input:
import pandas as pd

# Load the data
df = pd.read_csv('scatter_data.csv')

# Display the first few rows of the dataframe
df.head()

Observation: The output is:
          x         y      sizes     colors
0  7.577257  1.952430  46.433060  24.878965
1  4.873626  2.574814  38.541454  21.506782
2  4.192995  5.250496  62.999630  23.484101
3  0.273015  3.678973  63.538505  50.963053
4  3.445224  2.462327  59.921040  27.207964

Thought: Now that we have the data loaded, we can start thinking about how to create the scatter plot. We'll need to decide on the x-axis and y-axis labels, as well as the size and color settings for the points.
Action: code_interpreter
Action Input:
import matplotlib.pyplot as plt

# Set the x-axis label
2309.16609#262
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
263
color settings for the points.
Action: code_interpreter
Action Input:
import matplotlib.pyplot as plt

# Set the x-axis label
plt.xlabel("x")

# Set the y-axis label
plt.ylabel("y")

# Set the size of the points
plt.scatter(df["x"], df["y"], s=df["sizes"], c=df["colors"])

# Show the plot
plt.show()

Observation: [scatter plot image]

Thought: The plot is now complete. We can see that the points are scattered according to the x and y values, and the size and color of the points are determined by the values in the "sizes" and "colors" columns of the dataframe.
Final Answer: The scatter plot is now complete.
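For readers who want to reproduce the two-step workflow above outside the sandbox, the following is a self-contained sketch; the CSV from the figure is not distributed with the paper, so random data with the same column layout stands in for it.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# The figure's scatter_data.csv is not available, so synthesize a stand-in
# with the same columns (x, y, sizes, colors).
rng = np.random.default_rng(0)
pd.DataFrame({
    'x': rng.uniform(0, 8, 50),
    'y': rng.uniform(0, 6, 50),
    'sizes': rng.uniform(20, 80, 50),
    'colors': rng.uniform(0, 60, 50),
}).to_csv('scatter_data.csv', index=False)

# Step 1: load the data and inspect the first few rows, as in the first tool call.
df = pd.read_csv('scatter_data.csv')
print(df.head())

# Step 2: draw the labeled scatter plot, mapping point size and color to the
# 'sizes' and 'colors' columns, as in the second tool call.
plt.xlabel('x')
plt.ylabel('y')
plt.scatter(df['x'], df['y'], s=df['sizes'], c=df['colors'])
plt.show()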
2309.16609#263
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.16609
264
Figure 5: Example showcasing QWEN-CHAT’s ability in using a code interpreter via ReAct prompting. The ReAct instruction is omitted for clarity. QWEN creates a two-step plan and first investigates the columns present in the CSV file before proceeding to draw the plot, as shown in the top-left figure. CODE LLAMA, however, attempts to draw the plot based on non-existent columns in its initial attempt, as seen in the bottom figure. CODE LLAMA can only reliably perform the task if the columns are provided in the user query, as shown in the top-right figure.
2309.16609#264
Qwen Technical Report
Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
http://arxiv.org/pdf/2309.16609
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
cs.CL
59 pages, 5 figures
null
cs.CL
20230928
20230928
[ { "id": "2305.20050" }, { "id": "2108.07258" }, { "id": "2306.09212" }, { "id": "2203.15556" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "1911.02116" }, { "id": "2306.03901" }, { "id": "2204.06745" }, { "id": "2309.05653" }, { "id": "2111.10952" }, { "id": "2305.14233" }, { "id": "2306.08568" }, { "id": "2305.14314" }, { "id": "2305.06500" }, { "id": "2306.15595" }, { "id": "2305.18290" }, { "id": "2302.04761" }, { "id": "2103.03874" }, { "id": "2305.10403" }, { "id": "1910.03771" }, { "id": "2211.05100" }, { "id": "2009.03300" }, { "id": "2307.13528" }, { "id": "1710.05941" }, { "id": "2108.07732" }, { "id": "2210.17323" }, { "id": "2304.02015" }, { "id": "2305.14688" }, { "id": "2306.07906" }, { "id": "2110.14168" }, { "id": "2306.14824" }, { "id": "2303.17580" }, { "id": "2308.12950" }, { "id": "2210.02414" }, { "id": "2308.10848" }, { "id": "2301.03988" }, { "id": "2302.13971" }, { "id": "2208.07339" }, { "id": "2308.09583" }, { "id": "2112.09332" }, { "id": "2308.00352" }, { "id": "2309.00986" }, { "id": "2304.14178" }, { "id": "2110.08207" }, { "id": "1909.08053" }, { "id": "2305.08322" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2305.16291" }, { "id": "2204.05862" }, { "id": "2210.03629" }, { "id": "2303.14742" }, { "id": "2306.17492" }, { "id": "2004.05150" }, { "id": "1907.11692" }, { "id": "2106.09685" }, { "id": "2304.01196" }, { "id": "1707.06347" }, { "id": "1810.04805" }, { "id": "2204.02311" }, { "id": "2305.03047" }, { "id": "2304.10453" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2303.03378" }, { "id": "2303.17760" }, { "id": "2212.08073" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2112.11446" }, { "id": "2109.00859" }, { "id": "2309.00071" }, { "id": "2103.10360" }, { "id": "2210.09261" }, { "id": "1711.05101" }, { "id": "2112.00861" }, { "id": "2305.10250" }, { "id": "2006.16668" }, { "id": "2104.09864" }, { "id": "2002.05202" }, { "id": "2309.04658" } ]
2309.15088
1
{rpradeep, sahel.sharifymoghaddam, jimmylin}@uwaterloo.ca # Abstract Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm. # Introduction
2309.15088#1
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
2
# Introduction The widespread availability of instruction fine-tuned large language models (LLMs) has led to an explosion of applications in various natural language processing and information retrieval tasks. In the context of text retrieval, we have seen multiple efforts focused on zero-shot listwise reranking using LLMs (Sun et al., 2023; Ma et al., 2023), but unfortunately, to date, they have all relied on proprietary models. While such models support rapid prototyping, particularly when exposed as API endpoints, the reproducibility of experimental results that build on them is suspect—both from the normative perspective of what is “good science” and the practical perspective of obtaining reliable and deterministic measurements of experimental results. It would, of course, be desirable for the community to have access to a fully open-source LLM and associated code infrastructure capable of performing high-quality reranking.
2309.15088#2
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
3
results. It would, of course, be desirable for the community to have access to a fully open-source LLM and associated code infrastructure capable of performing high-quality reranking. RankVicuna provides exactly this: To our knowledge, we present the first open-source large language model for zero-shot listwise document reranking. Experimental validation on test collections from the TREC 2019 and 2020 Deep Learning Tracks (Craswell et al., 2020, 2021) shows that the effectiveness of our model is on par with zero-shot reranking using GPT3.5, but slightly worse than reranking with GPT4. However, we can achieve these results with a much smaller model with only 7B parameters while still constrained to a GPT3.5 teacher. We share our model checkpoints and associated code, providing a valuable resource for the research community.
2309.15088#3
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
4
During the process of building RankVicuna, we have gained several important insights that we share: First, we confirm that proprietary LLMs are indeed effective at reranking in a zero-shot manner (Sun et al., 2023; Ma et al., 2023), although they exhibit several shortcomings. Beyond the obvious issue of non-reproducibility, results from these models are also non-deterministic, which makes them unreliable for rigorous scientific research. Additionally, proprietary LLMs occasionally fail to follow the requested format in their responses. In contrast, RankVicuna is open-source, deterministic, and always generates well-formed responses. Second, we examine the impact of first-stage retrieval methods on downstream reranking effectiveness and find that RankVicuna consistently improves over the baseline retrieved results. We also find that with an effective first-stage retriever, even a single pass with reranking only the top 20 candidates brings an improvement similar to reranking the top 100 candidates. Finally, our experiments shed some light on the importance of training strategies that involve data
2309.15088#4
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
5
Finally, our experiments shed some light on the importance of training strategies that involve data augmentation to ensure model robustness against shuffled candidates or variations in initial retrieval quality. However, we note that data augmentation techniques affect the quality of model outputs under “ideal” conditions, and thus we face an effectiveness–robustness tradeoff. Our work lays a solid foundation for future research. By making our models and infrastructure available to the public, we hope to stimulate further exploration and innovation in reranking. We anticipate that our findings will guide researchers in developing more effective and efficient reranking models. As the demand for accurate and reliable information retrieval systems continues to grow in this age of retrieval-augmented LLMs, we expect our work to contribute to future advances. # 2 Background and Related Work Given a corpus C = {D1, D2, ..., Dn} containing a collection of documents and an information need expressed as a query q, the task of a retriever is to efficiently return a list of k documents from C that are most relevant to the query q according to some metric such as nDCG or average precision, where k ≪ |C|. The task of a reranker is to further improve the quality of the ranked list produced by the retriever or another upstream reranker, according to either the same or a different metric.
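The excerpt names nDCG without defining it. For reference, a standard graded formulation (not quoted from the paper) is:

\[
\mathrm{DCG}@k = \sum_{i=1}^{k} \frac{2^{\mathrm{rel}_i} - 1}{\log_2(i+1)},
\qquad
\mathrm{nDCG}@k = \frac{\mathrm{DCG}@k}{\mathrm{IDCG}@k},
\]

where $\mathrm{rel}_i$ is the graded relevance of the document at rank $i$ and $\mathrm{IDCG}@k$ is the DCG@k of the ideal (relevance-sorted) ordering, so that $\mathrm{nDCG}@k \in [0, 1]$.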
2309.15088#5
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
6
Retrievers and rerankers together form multi-stage ranking pipelines for text ranking, which have been studied in the context of transformer models (Nogueira et al., 2019; Gao et al., 2021) but date back well over a decade (Matveeva et al., 2006; Cambazoglu et al., 2010; Wang et al., 2011). Nogueira and Cho (2019) were the first to demonstrate the use of (encoder-only) transformer models for reranking (using BERT) with a simple cross-encoder architecture they called monoBERT. While neural rerankers had been explored extensively by researchers prior to the advent of BERT, the monoBERT model represented a significant advance in effectiveness; see Lin et al. (2021b) for a historical overview. Following monoBERT, other researchers have explored reranking using decoder-only transformer models (Nogueira dos Santos et al., 2020) and full encoder–decoder models (Nogueira et al., 2020; Zhuang et al., 2022). These approaches are effective but require copious amounts of training data in the form of (query, relevant passage) pairs; often, the MS MARCO dataset (Bajaj et al., 2016)
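To make the pointwise cross-encoder setup concrete, here is a hedged sketch of monoBERT-style reranking using the Hugging Face transformers library; the checkpoint name is an assumed, publicly available MS MARCO cross-encoder chosen for illustration, not the model discussed in the paper, and the snippet is not the authors' code.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Pointwise (monoBERT-style) reranking: each (query, passage) pair is scored
# independently by a cross-encoder over the concatenated pair.
name = "cross-encoder/ms-marco-MiniLM-L-6-v2"  # assumed checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
model.eval()

query = "what is listwise reranking"
passages = [
    "Listwise rerankers score an entire candidate list at once.",
    "The weather in Waterloo is mild in September.",
]

with torch.no_grad():
    inputs = tokenizer([query] * len(passages), passages,
                       padding=True, truncation=True, return_tensors="pt")
    scores = model(**inputs).logits.squeeze(-1)  # one relevance score per pair

# Sort passages by descending score to produce the reranked list.
reranked = [p for _, p in sorted(zip(scores.tolist(), passages), reverse=True)]
print(reranked)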
2309.15088#6
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
7
is used for such purposes. Most of the early work on reranking with transformers can be characterized as a pointwise approach, where the relevance of a particular candidate document is estimated independently of others. More recently, however, researchers have addressed this shortcoming by incorporating pairwise and listwise losses in their cross-encoder approaches (Gao et al., 2021; Pradeep et al., 2022b; Zhuang et al., 2022). Using hard negatives in combination with such losses yields systems that are better at reranking in high-precision settings and that align more closely to the first-stage retriever.
2309.15088#7
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
8
In contrast, our work focuses on the zero-shot setting, where the model is not provided any task-specific supervised training (e.g., relevant query–passage pairs). We build on a recent thread of work (Sun et al., 2023; Ma et al., 2023; Qin et al., 2023) that directly uses LLMs as rerankers in a multi-stage ranking pipeline, primarily focusing on prompt engineering to accomplish the reranking task. We coin the term “prompt-decoders” (in contrast to BERT-style cross-encoders) to characterize this class of rerankers. Furthermore, since these models are not fine-tuned or benefit from in-context learning, we might describe this type of reranking model as a zero-shot prompt-decoder. To use an open-source LLM as a prompt-decoder, Qin et al. (2023) adopted a pairwise approach since FLAN-UL2 is not capable of reordering a list of input documents. We find the same shortcoming to be also true for Vicuna, but we address this by using RankGPT3.5 as its teacher.
2309.15088#8
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
9
Rerankers depend on an upstream source to supply candidate documents, which can be a first-stage retriever or another reranker. In all our experiments, we rely on a first-stage retriever to generate a candidate list of documents from the corpus. Researchers have explored a variety of sparse, dense, and hybrid retrieval techniques, but these are not the focus of our study. We refer interested readers to Lin (2021) and Lin et al. (2021b) for an overview of such models. In another relevant thread, recent work such as InPars (Bonifacio et al., 2022; Boytsov et al., 2023) and Promptagator (Dai et al., 2022) explored using LLMs to generate synthetic queries for documents to craft relevant query–document pairs as training data for retrievers or rerankers. Similarly, HyDE (Gao et al., 2023) used LLMs to augment queries by generating hypothetical documents for
2309.15088#9
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
10
unsupervised dense retrieval. Related, Sachan et al. (2023) proposed ART, a novel approach to training a dense passage retriever starting only with questions, which outperforms the standard reference dense retrieval model DPR (Karpukhin et al., 2020). In the emerging paradigm of generative retrieval, Pradeep et al. (2023) explored different document representation strategies and found synthetic queries to be necessary for effectiveness as the corpus size increases. However, all these approaches take advantage of large language models indirectly. Finally, we note that rerankers have gained additional prominence in recent months with the introduction of commercially available API endpoints. Examples include Cohere’s Rerank API1 and Microsoft’s Semantic Search API in Azure Cognitive Search.2 The existence of these production services suggests that reranking models have attained maturity beyond explorations in research laboratories, and that rerankers address a real-world problem. # 3 Methods # 3.1 Prompt Design
2309.15088#10
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
11
# 3 Methods # 3.1 Prompt Design Recent work (Ma et al., 2023) has shown that zero-shot listwise LLM-based rerankers outperform their pointwise counterparts since the former can attend to multiple documents simultaneously to determine their relative positions in a relevance ranking. We build on this finding and define our ranking problem as follows: Given a user query q and candidate documents {D1, . . . , Dn} from the previous stage, the task is to return a reordered list of the input document identifiers that improves a retrieval metric such as nDCG. Our prompt for listwise reranking is similar to the RankGPT prompt (Sun et al., 2023), but accounts for differences between Vicuna and GPT; specifically, we use the default system description for Vicuna. In addition, we modified the prompt to show that the answer can, and in many cases should, deviate from the identity ordering, [1] > [2] > . . . > [m]. The exact input prompt to Vicuna is shown in Figure 1.
2309.15088#11
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
12
We prepend the prompt with the system description, which, in Vicuna's case, is "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions." We hope that aligning our model with the exact prompt setup used to train Vicuna would help generate higher-quality ranked lists for our task.

Figure 1: User Input for both RankVicuna and our replication of RankGPT.

USER: I will provide you with {num} passages, each indicated by a numerical identifier []. Rank the passages based on their relevance to the search query: {query}.
[1] {passage 1}
[2] {passage 2}
...
[{num}] {passage {num}}
Search Query: {query}.
Rank the {num} passages above based on their relevance to the search query. All the passages should be included and listed using identifiers, in descending order of relevance. The output format should be [] > [], e.g., [4] > [2]. Only respond with the ranking results, do not say any word or explain.

1 https://cohere.com/rerank
2 https://learn.microsoft.com/en-us/azure/search/semantic-search-overview

# 3.2 RankVicuna
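A minimal sketch of how the Figure 1 user message and the Vicuna system description might be assembled into a single prompt string; build_prompt and the plain-text joining of the system and user turns are assumptions for illustration, since the actual formatting is handled by Vicuna's chat template.

```python
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(query: str, passages: list[str]) -> str:
    """Assemble the listwise reranking prompt of Figure 1 for one window of passages."""
    num = len(passages)
    lines = [
        f"I will provide you with {num} passages, each indicated by a numerical "
        f"identifier []. Rank the passages based on their relevance to the search query: {query}.",
    ]
    lines += [f"[{i + 1}] {p}" for i, p in enumerate(passages)]
    lines += [
        f"Search Query: {query}.",
        f"Rank the {num} passages above based on their relevance to the search query. "
        "All the passages should be included and listed using identifiers, in descending "
        "order of relevance. The output format should be [] > [], e.g., [4] > [2]. "
        "Only respond with the ranking results, do not say any word or explain.",
    ]
    # Plain concatenation of the system description and the user turn is used
    # here only for illustration; a real chat template adds role markers.
    return SYSTEM + "\n\nUSER: " + "\n".join(lines)
```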
2309.15088#12
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
13
# 3.2 RankVicuna We leveraged RankGPT3.5 as a teacher model for Vicuna to prompt-decode high-quality ranked lists. More specifically, we trained RankVicuna on the ranked lists generated by RankGPT3.5 for the 100K training set queries provided by Sun et al. (2023). To generate this dataset, the authors randomly sampled 100K queries from the MS MARCO v1 passage ranking training set and retrieved 20 candidates for each query using BM25 with Pyserini (Lin et al., 2021a). Then, these candidates were passed into RankGPT3.5 to generate teacher orderings, which we distill down to our student, RankVicuna. Since neither RankGPT3.5 nor RankVicuna is directly exposed to human-labeled relevant query–passage pairs, our approach can still be considered zero-shot. To ensure higher quality and more robust trained models, we took the following additional steps:
• We did not train on malformed generations. More specifically, examples with incorrect list formatting, missing document identifiers, or repetitions were excluded from the training set. This is important as we find that about 12% of the outputs were malformed, and we desire a model that consistently generates a well-formed ordering.
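As a rough sketch of the candidate-generation step described above, the snippet below retrieves BM25 candidates with Pyserini; it assumes a recent Pyserini release and the prebuilt msmarco-v1-passage index name, and it is not the authors' data-generation script.

```python
# pip install pyserini  (requires a Java 11+ runtime)
from pyserini.search.lucene import LuceneSearcher

# Prebuilt BM25 index for the MS MARCO v1 passage corpus (~8.8M passages).
searcher = LuceneSearcher.from_prebuilt_index("msmarco-v1-passage")

def bm25_candidates(query: str, k: int = 20) -> list[tuple[str, str]]:
    """Return the top-k (docid, stored document) pairs for a query.
    For this index, raw() returns the stored JSON record holding the passage text."""
    hits = searcher.search(query, k=k)
    return [(hit.docid, searcher.doc(hit.docid).raw()) for hit in hits]

# Each query's top-20 candidates are then sent to the teacher (RankGPT3.5)
# to produce the ordering that RankVicuna is trained to reproduce.
candidates = bm25_candidates("what is a prompt decoder", k=20)
```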
2309.15088#13
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
14
• Besides including the original generations provided by the teacher, which reranks the top 20 results by BM25 (Robertson and Zaragoza, 2009), we also include a condition where the input order is shuffled. Our hope is that this exposes the model to a more complex reordering task while not incurring additional data generation costs. However, we still retain the original BM25 input ordering, as we believe it is important to model "success", given it is the closest to what the model sees during inference.
All RankVicuna settings in the rest of the paper involve this data augmentation (DA) process unless otherwise specified. We trained our 7B parameter RankVicuna for two epochs with an effective batch size of 128 and a learning rate of 2 × 10^-5 in bfloat16. Training took roughly 80 hours on four NVIDIA RTX A6000 GPUs. The Vicuna model that served as our initial weights can be found under lmsys/vicuna-7b-v1.5 in the HuggingFace Hub. This model is instruction fine-tuned from Meta's LLaMA-v2 model (Touvron et al., 2023).
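The shuffling-based augmentation can be sketched as follows, under the assumption that each training example pairs a candidate list with the teacher ordering expressed as 0-based indices; augment_example is a hypothetical helper, not the released training code.

```python
import random

def augment_example(candidates: list[str], teacher_order: list[int], seed: int = 0):
    """Create a shuffled-input variant of one training example.

    `teacher_order` holds 0-based indices into `candidates` in the order the
    teacher (RankGPT3.5) ranked them. Shuffling the inputs changes which
    identifier each passage receives, so the target permutation is remapped
    to preserve the same underlying passage ordering.
    """
    rng = random.Random(seed)
    positions = list(range(len(candidates)))
    rng.shuffle(positions)                        # new input order of the passages
    shuffled = [candidates[i] for i in positions]
    new_id_of = {old: new for new, old in enumerate(positions)}
    remapped_target = [new_id_of[i] for i in teacher_order]
    return shuffled, remapped_target

# Each original (BM25-ordered) example is kept as well, so the model still
# sees the input distribution it will encounter at inference time.
```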
2309.15088#14
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
16
# 4 Experimental Setup To demonstrate the effectiveness of RankVicuna, we compared it with existing representative unsupervised ranking methods (BM25 and Contriever) as well as our replications of two closed-source prompt-decoder models: LRL (Ma et al., 2023) with GPT3.5 and RankGPT (Sun et al., 2023), with both GPT3.5 and GPT4, which we refer to as RankGPT3.5 and RankGPT4, respectively. GPT3.5 refers to the model dubbed gpt-3.5-turbo in the OpenAI suite, while GPT4 refers to gpt-4. We also compared RankVicuna with our replication of PRP-Sliding-10 from Qin et al. (2023), albeit with Vicuna (7B parameters). For these experiments, we used Vicuna instead of FLAN-T5 or FLAN-UL2 because we wanted an apples-to-apples comparison with the same base LLM. Additionally, we note that the FLAN mixture, used to pretrain the models, includes the MS MARCO QA task,3 thereby rendering the results suspect from the perspective of zero-shot retrieval.
2309.15088#16
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
17
3 https://github.com/google-research/FLAN/blob/e9e4ec6e2701182c7a91af176f705310da541277/flan/v2/flan_collection_info.csv#L1032
We evaluated our methods using test collections from the TREC 2019 and 2020 Deep Learning Tracks (Craswell et al., 2020, 2021), using query and relevance judgments from the passage retrieval tasks. These tasks use the MS MARCO v1 passage corpus (Bajaj et al., 2016), which contains 8.8 million passages. For convenience, we refer to these datasets as DL19 and DL20. We report effectiveness in terms of nDCG@10 and average precision at a rank cutoff of 100 (denoted MAP@100). The context size is 4096 for Vicuna and GPT3.5 and 8192 for GPT4. To reorder the top 100 candidates for each query given these context sizes, we used a sliding window similar to RankGPT and LRL. In our experiments, we have adopted the same values as RankGPT (window size 20, stride 10) to isolate the impact of window and stride size in our comparisons.
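A sketch of the sliding-window strategy described above; rerank_window stands in for a single listwise call over at most window_size passages, and the exact boundary handling is an assumption rather than a description of the released implementation.

```python
def sliding_window_rerank(candidates: list[str], rerank_window,
                          window_size: int = 20, stride: int = 10) -> list[str]:
    """Reorder up to 100 candidates with a listwise reranker whose context only
    fits `window_size` passages, sliding from the bottom of the list toward the top."""
    ranking = list(candidates)
    end = len(ranking)
    while end > 0:
        start = max(0, end - window_size)
        # Rerank one window in place; the highest-ranked window is processed
        # last, so promising passages can bubble toward the top of the list.
        ranking[start:end] = rerank_window(ranking[start:end])
        end -= stride
        if start == 0:
            break
    return ranking
```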
2309.15088#17
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
18
Unlike RankVicuna, we (surprisingly) observe non-deterministic outputs for GPT3.5 and GPT4, even with a temperature of zero. For these two models, we report the mean over six and three runs, respectively, with 99% confidence intervals. We limited the number of GPT4 runs to three due to our computation budget. In all our reranking experiments, we replaced any reference of the form [n] in the passages with (n) to avoid confusing the models. We also leveraged ftfy's fix_text method to preprocess any input sent to the rerankers. # 5 Results
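A small sketch of this preprocessing step, assuming ftfy is installed; the regular expression is an illustrative guess at how bracketed references might be rewritten, not necessarily the authors' exact pattern.

```python
import re
from ftfy import fix_text

def preprocess_passage(text: str) -> str:
    """Clean a passage before it is placed in the reranking prompt."""
    text = fix_text(text)                      # repair mojibake and odd unicode
    # Rewrite in-passage references like [3] as (3) so they are not mistaken
    # for the numerical identifiers used in the prompt.
    text = re.sub(r"\[(\d+)\]", r"(\1)", text)
    return text
```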
2309.15088#18
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
19
# 5 Results Table 1 compares different reranking pipelines using data from DL19 and DL20. Rows (1) and (2) report baselines using two first-stage retrievers, BM25 and Contriever (Izacard et al., 2021). The remaining rows (besides the last one) report the results of using zero-shot LLM rerankers to reorder the top 100 candidate documents retrieved by BM25. Rows (6) and (7) show scores of two variants of PRP-Sliding-10, FLAN-T5-XXL and FLAN-UL2, directly copied from Qin et al. (2023). The final row represents our best system, where we apply RankVicuna to rerank the top 100 candidates generated by SPLADE++ EnsembleDistil (Formal et al., 2021), a state-of-the-art neural first-stage sparse retrieval method.
2309.15088#19
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
20
As expected, all LLM rerankers outperform the baseline (first-stage) methods. The effectiveness of RankVicuna, with 7B parameters, is on par with the effectiveness of RankGPT3.5, with 175B pa

Table 1 (partial; truncated at the chunk boundary):
Source                             Prev.   Top-k   DL19 nDCG@10   DL19 MAP@100   DL20 nDCG@10   DL20 MAP@100
(1) BM25                           None    |C|     0.5058         0.2476         0.4796         0.2685
(2) Contriever                     None    |C|     0.6164         0.3163         0.5986         0.3309
(3) LRL (GPT3.5)                   BM25    100     0.6451±0.003   0.3035±0.004   0.6099±0.004   0.3496±0.004
(4) RankGPT3.5                     BM25    100     0.6855±0.006   0.3335±0.002   0.6202±0.005   0.3525±0.002
(5) RankGPT4                       BM25    100     0.7500±0.002   0.3703±0.004   0.7036±0.004   0.4134±0.004
(6) PRP-Sliding-10 (FLAN-T5-XXL)
(7) PRP-Sliding-10
2309.15088#20
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
22
Table 1: nDCG@10 and MAP@100 on DL19 and DL20 for different reranking pipelines, with BM25 and Contriever as baselines. Each reranker uses the top 100 retrieved results of the previous stage as input. Rows (3–4) and row (5) represent averages of six and three runs, respectively. We directly copied results in rows (6–7) from Qin et al. (2023). All other results are from our own experiments.

            OK       Wrong Format   Repetition   Missing   Total
RankGPT3.5  838.67   0              1.16         33.16     873
RankGPT4    830.33   40.67          1.67         0.33      873
RankVicuna  873      0              0            0         873

Table 2: The number of malformed responses for each reranking method. Reported numbers for RankGPT3.5 and RankGPT4 are averages of six and three runs, respectively.

rameters. Specifically, compared to its teacher RankGPT3.5, RankVicuna achieves higher scores on DL20 but slightly lower scores on DL19. Compared with another zero-shot reranking method, LRL, which uses RankGPT3.5, RankVicuna demonstrates considerably higher effectiveness on both DL19 and DL20.
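The categories counted in Table 2 can be checked mechanically; the sketch below is one plausible way to do that bookkeeping (categorize_response is a hypothetical helper, and the precedence among categories is an assumption).

```python
import re

def categorize_response(response: str, num_candidates: int) -> str:
    """Classify a listwise response as 'ok', 'wrong format', 'repetition', or 'missing'."""
    # The expected output is identifiers joined by '>', e.g. "[4] > [2] > [1]".
    if not re.fullmatch(r"\s*\[\d+\](\s*>\s*\[\d+\])*\s*", response):
        return "wrong format"
    ids = [int(m) for m in re.findall(r"\[(\d+)\]", response)]
    if len(ids) != len(set(ids)):
        return "repetition"
    # Missing (or out-of-range) identifiers relative to the window size.
    if set(ids) != set(range(1, num_candidates + 1)):
        return "missing"
    return "ok"
```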
2309.15088#22
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
23
We note that PRP-Sliding-10 (FLAN-T5-XXL) with 11B parameters is comparable to RankVicuna both in terms of model size and effectiveness. Other than being fully open-source, our main advantage over PRP-Sliding-10 (FLAN-T5-XXL) is the prompt cost: to bring the top 10 most relevant candidates to the top of the list, PRP-Sliding-10 (FLAN-T5-XXL) requires each passage to be included in ∼40 prompts on average. In contrast, we only require two prompts for our listwise approach with a sliding window of size 20 and a stride of 10. Furthermore, training on the FLAN mixture, which includes the MS MARCO QA task, calls into question the validity of PRP-Sliding-10 (FLAN-T5-XXL) as a true zero-shot method. We suspect this to be a contributing factor to the effectiveness gap between PRP-Sliding-10 (FLAN-T5-XXL) and PRP-Sliding-10 (Vicuna).
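The listwise prompt-count claim can be verified with a few lines of arithmetic; the figure of roughly 40 prompts per passage for PRP-Sliding-10 is taken from the text above rather than derived here.

```python
def prompts_per_passage(num_docs: int = 100, window: int = 20, stride: int = 10):
    """Count how many sliding-window prompts each passage appears in."""
    counts = [0] * num_docs
    end = num_docs
    while end > 0:
        start = max(0, end - window)
        for i in range(start, end):
            counts[i] += 1
        if start == 0:
            break
        end -= stride
    return counts

counts = prompts_per_passage()
print(max(counts))  # 2: each passage appears in at most two listwise prompts,
                    # versus ~40 pairwise prompts per passage for PRP-Sliding-10
```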
2309.15088#23
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
24
Not surprisingly, both RankGPT4 (rumored to contain more than 1T parameters) and PRP-Sliding-10 (FLAN-T5-UL2) with 20B parameters outperform RankVicuna. This could be because, in addition to the differences in model sizes, the effectiveness of RankVicuna is bounded by its teacher, RankGPT3.5. Finally, in row (10), we used RankVicuna to rerank the top 100 candidates from SPLADE++ EnsembleDistil instead of BM25. This combination achieves effectiveness on par with RankGPT4 with an open-source model that is more than two orders of magnitude smaller. Table 2 shows the number of malformed responses generated by the RankGPT variants and RankVicuna, which we have grouped into the following categories:
1. Wrong Format: includes responses that do not follow the requested format. For example, when RankGPT4 refuses to generate a sorted list, its response falls into this category.
2. Repetition: includes responses that contain repeated document ids.
3. Missing: includes responses with missing document ids.
Since RankVicuna is deterministic, we report the results of a single run. For every request in this
2309.15088#24
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
25
Source                      Prev.         Top-k   DL19 nDCG@10   DL19 MAP@100   DL20 nDCG@10   DL20 MAP@100
(1a) BM25                   None          |C|     0.5058         0.2476         0.4796         0.2685
(1b) RankVicuna             BM25          20      0.6164         0.2867         0.5986         0.3194
(1c) RankVicuna             BM25          100     0.6682         0.3316         0.6549         0.3789
(2a) BM25 + RM3             None          |C|     0.5216         0.2807         0.4896         0.2821
(2b) RankVicuna             BM25 + RM3    20      0.6053         0.3110         0.5825         0.3323
(2c) RankVicuna             BM25 + RM3    100     0.6588         0.3573         0.6567         0.3991
(3a) OpenAI ada2            None          |C|     0.7035         0.4151         0.6759         0.4587
(3b) RankVicuna             OpenAI ada2   20      0.7448         0.4398         0.7101         0.4718
(3c) RankVicuna             OpenAI ada2   100     0.7374         0.4409         0.7210         0.4755
(4a) DistillBERT KD TASB (4b)
2309.15088#25
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
26
(Table 3, continued)
Source                      Prev.                 Top-k   DL19 nDCG@10   DL19 MAP@100   DL20 nDCG@10   DL20 MAP@100
(4a) DistillBERT KD TASB    None                  |C|     0.7210         0.4050         0.6854         0.4520
(4b) RankVicuna             DistillBERT KD TASB   20      0.7588         0.4121         0.7404         0.4648
(4c) RankVicuna             DistillBERT KD TASB   100     0.7551         0.4170         0.7049         0.4620
(5a) SPLADE++ ED            None                  |C|     0.7308         0.4464         0.7197         0.4826
(5b) RankVicuna             SPLADE++ ED           20      0.7532         0.4491         0.7455         0.5150
(5c) RankVicuna             SPLADE++ ED           100     0.7459         0.4416         0.7473         0.5183
2309.15088#26
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
27
Table 3: nDCG@10 and MAP@100 for RankVicuna with different first-stage candidate generation methods. For each method, reranking is performed using the top 20 or 100 candidates. run, RankVicuna returned a correctly formatted response. In contrast, for RankGPT3.5 and RankGPT4, we averaged the results of six and three runs, respectively. Both RankGPT methods occasionally return malformed responses. Most of the malformed responses from RankGPT3.5 are missing documents in the ordered list; when malformed, RankGPT4 mostly refuses to rank. Repetition is a rare problem for both RankGPT methods. # 6 Ablation Studies candidates improves effectiveness by 30%–45% for all metrics, the improvement for SPLADE++ ED is only 2%–4% for the same metrics. This is a commonly noted phenomenon across multi-stage ranking systems (Pradeep et al., 2021, 2022b,a). Comparing top 20 vs. top 100 results shows that reranking more candidates generally results in a higher MAP@100. However, in cases where the first-stage effectiveness is "good enough", rows (3–5) for DL19 and rows (4–5) for DL20, reranking only the top 20 candidates achieves an nDCG@10 score on par with reranking the top 100 candidates.
2309.15088#27
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
28
# 6.1 First-Stage Candidate Generation To evaluate the impact of the quality and quantity of the generated candidates on the final results, we repeated our experiments with the following five first-stage retrieval methods using either top 20 or top 100 retrieved results: (1) BM25 (Robertson and Zaragoza, 2009), (2) BM25+RM3 (Abdul-Jaleel et al., 2004), (3) OpenAI ada2 (Neelakantan et al., 2022; Lin et al., 2023), (4) DistillBERT KD TASB (Hofstätter et al., 2021), (5) SPLADE++ EnsembleDistil (ED) (Formal et al., 2022). The first two represent strong traditional "bag-of-words" retrieval baselines; the others represent a sample of effective neural first-stage retrievers that are commonly seen in research studies today. OpenAI ada2 and DistillBERT KD TASB are dense retrieval methods, while SPLADE++ ED is a sparse one. Our experiments show that as the first-stage effectiveness increases, additional improvements from RankVicuna decrease (see Table 3). For example, while RankVicuna over the top 100 BM25 # 6.2 Data Augmentation
2309.15088#28
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
29
# 6.2 Data Augmentation Section 3.2 discussed the training process of RankVicuna, highlighting the use of data augmentation (DA) as a crucial step in our training pipeline. To recap, the DA process involves shuffling the input order of the documents and permuting the original generations provided by the teacher. This step exposes the model to a more complex reordering task, which hopefully enhances its robustness and effectiveness. In this section, we study the dependence of RankVicuna on the order of generated candidates. We compared two versions of the model: (1) the default version trained using data augmentation (DA), and (2) a variant trained without DA. Experimental results are shown in Table 4. Using BM25 as the first stage, our experiments show that RankVicuna without DA results in worse effectiveness than RankVicuna with DA.
2309.15088#29
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
30
Table 4 (partial; remaining cells truncated at the chunk boundary):
Source                      Prev.               Top-k   DL19 nDCG@10   DL19 MAP@100   DL20 nDCG@10   DL20 MAP@100
(1a) RankVicuna             BM25                100     0.6682         0.3316         0.6549         0.3789
(1b) RankVicuna             Shuf. BM25          100     0.6702±0.009   0.2977±0.006   0.6537±0.006   0.3553±0.006
(1c) RankVicuna             SPLADE++ ED         100     0.7459         0.4416         0.7473         0.5183
(1d) RankVicuna             Shuf. SPLADE++ ED   100     0.7271±0.009   0.3860±0.008   0.7071±0.007   0.4312±0.006
(2a) RankVicuna (w/o DA)    BM25                100     0.6612         0.3254
(2b) RankVicuna (w/o DA)    Shuf. BM25          100     0.5893±0.017   0.2666±0.011
(2c) RankVicuna (w/o DA)    SPLADE++ ED         100     0.7653         0.4672
(2d) RankVicuna (w/o DA)    Shuf. SPLADE++ ED   100     0.5893±0.010
2309.15088#30
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
32
Table 4: nDCG@10 and MAP@100 of two variants of RankVicuna with different first-stage candidate generation methods. For each method, reranking is performed using the top 100 candidates from the previous step on six shuffled orderings. We report average metrics with 99% confidence intervals.

[Figure 2 plot: nDCG@10 (y-axis, roughly 0.5–0.7) versus the number of sliding window passes (x-axis, 0–10), with curves for RankVicuna and PRPVicuna on DL19 and DL20.]
Figure 2: Comparing the effectiveness of RankVicuna vs. PRPVicuna on DL19 and DL20, varying the number of times the ranked list is progressively refined. The zeroth pass corresponds to the BM25 run.

When we replace BM25 with SPLADE++ ED, RankVicuna without DA outperforms RankVicuna with DA. While data augmentation can cause a small drop in effectiveness (depending on the first stage), it makes the model less vulnerable to poor quality candidates (whether intentional or not), as shown by Qin et al. (2023) in methods like PRP-Sliding-10 and RankGPT3.5.
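The shuffled rows in Table 4 report means with 99% confidence intervals over repeated runs; a small sketch of that computation is shown below, using a normal approximation as an assumption since the paper does not state how the intervals were computed (the example scores are made up).

```python
import statistics

def mean_with_ci(scores: list[float], z: float = 2.576) -> tuple[float, float]:
    """Mean and half-width of a 99% confidence interval (normal approximation)."""
    mean = statistics.mean(scores)
    half_width = z * statistics.stdev(scores) / len(scores) ** 0.5
    return mean, half_width

# e.g., nDCG@10 over six shuffled candidate orderings (illustrative values)
mean, ci = mean_with_ci([0.668, 0.676, 0.665, 0.673, 0.671, 0.669])
print(f"{mean:.4f}±{ci:.3f}")
```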
2309.15088#32
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
33
represents the number of sliding window passes, ranging from 0 to 10, and the y-axis represents the nDCG@10 score. We plot four curves, each representing a combination of a reranking method and a dataset. The solid lines show results on DL19 and the dashed lines show results on DL20. The blue lines represent the RankVicuna method and the red lines represent the PRPVicuna method (Qin et al., 2023). To showcase this vulnerability, we provided both model variants with shuffled candidate documents (rows b and d). The results show that the model without DA exhibited a significant effectiveness drop (up to 34%) and higher variance among different runs. In contrast, the default model, which is more robust due to its exposure to a more complex reordering task, better retained its effectiveness (comparing rows b vs. a and d vs. c, respectively, for each version).
2309.15088#33
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
34
We see that, for both datasets, RankVicuna consistently outperforms PRPVicuna. The nDCG@10 score for RankVicuna on DL19 starts at 0.5058 and increases to 0.6837 at the second pass, remaining relatively stable thereafter. The score for RankVicuna on DL20 follows a similar pattern, starting at 0.4796 and rising to about 0.6604 at pass four, albeit at a slower pace after the first pass. On the other hand, the nDCG@10 scores for PRPVicuna on both datasets increase gradually with each pass but remain far below RankVicuna. # 6.3 Effect of Progressive Reranking Finally, Figure 2 compares the effectiveness of two reranking methods, RankVicuna and a variant of PRP-Sliding from Qin et al. (2023), which we call PRPVicuna, on two datasets, DL19 and DL20. The x-axis This plot suggests that RankVicuna is more effective than PRPVicuna and that multiple passes of the sliding window provide only a minimal effectiveness boost for RankVicuna. It is also
2309.15088#34
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
35
worth noting that a single pass of reranking with both methods takes about the same time, around 30 seconds per query using a batch size of one on an RTX A6000 GPU. These results show that RankVicuna is much more efficient and achieves quicker convergence to the best possible results. This is likely because PRPVicuna handles only two passages at a time, whereas RankVicuna attends to 20 passages simultaneously, resulting in more effective relevance estimation. # 7 Conclusion In this study, we introduce RankVicuna, a listwise zero-shot reranking approach powered by an open-source large language model, Vicuna. Experimental studies show that our model achieves effectiveness on par with much larger models. We also quantitatively demonstrated the stability of RankVicuna results compared to closed-source counterparts. Along the way, we explored many aspects of prompt-decoder models for reranking, including the impact of first-stage retrievers on downstream effectiveness. Our work also sheds light on the importance of data augmentation for system robustness, which plays a vital role in ensuring stability in the face of document shuffling and variations in initial retrieval quality.
2309.15088#35
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
36
In summary, RankVicuna advances zero-shot reranking for information retrieval, demonstrating the potential of large language models to enhance search effectiveness, even in data-scarce settings. We are able to achieve high-quality reranking using fully open-source models, which provides a firm foundation for the rest of the research community to build on. As we further refine and expand these techniques, we anticipate exciting opportunities for integrating large language models into end-to-end information access applications. # Acknowledgments This research was supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada. # References Nasreen Abdul-Jaleel, James Allan, W. Bruce Croft, Fernando Diaz, Leah Larkey, Xiaoyan Li, Donald Metzler, Mark D. Smucker, Trevor Strohman, Howard Turtle, and Courtney Wade. 2004. UMass at TREC 2004: Novelty and HARD. In Proceedings of the Thirteenth Text REtrieval Conference (TREC 2004), Gaithersburg, Maryland.
2309.15088#36
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
37
Thirteenth Text REtrieval Conference (TREC 2004), Gaithersburg, Maryland. Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, and Tong Wang. 2016. MS MARCO: A human generated machine reading comprehension dataset. arXiv:1611.09268v3. Luiz Bonifacio, Hugo Abonizio, Marzieh Fadaee, and Rodrigo Nogueira. 2022. InPars: Unsupervised dataset generation for information retrieval. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2022), pages 2387–2392, Madrid, Spain. Leonid Boytsov, Preksha Patel, Vivek Sourabh, Riddhi Nisar, Sayani Kundu, Ramya Ramanathan, and Eric Nyberg. 2023. InPars-Light: Cost-effective unsupervised training of efficient rankers. arXiv:2301.02998.
2309.15088#37
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
38
B. Barla Cambazoglu, Hugo Zaragoza, Olivier Chapelle, Jiang Chen, Ciya Liao, Zhaohui Zheng, and Jon Degenhardt. 2010. Early exit optimizations for additive machine learned ranking systems. In Proceedings of the Third ACM International Conference on Web Search and Data Mining (WSDM 2010), pages 411–420, New York, New York. Nick Craswell, Bhaskar Mitra, Emine Yilmaz, and Daniel Campos. 2021. Overview of the TREC 2020 deep learning track. arXiv:2102.07662. Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, and Ellen M. Voorhees. 2020. Overview of the TREC 2019 deep learning track. arXiv:2003.07820. Zhuyun Dai, Vincent Zhao, Ji Ma, Yi Luan, Jianmo Ni, Jing Lu, Anton Bakalov, Kelvin Guu, Keith B. Hall, and Ming-Wei Chang. 2022. Promptagator: Few-shot dense retrieval from 8 examples. arXiv:2209.11755.
2309.15088#38
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
39
Thibault Formal, Carlos Lassance, Benjamin Piwowarski, and Stéphane Clinchant. 2021. SPLADE v2: Sparse lexical and expansion model for information retrieval. arXiv:2109.10086. Thibault Formal, Carlos Lassance, Benjamin Piwowarski, and Stéphane Clinchant. 2022. From distillation to hard negative sampling: Making sparse neural IR models more effective. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2022), pages 2353–2359, Madrid, Spain. Luyu Gao, Zhuyun Dai, and Jamie Callan. 2021. Rethink training of BERT rerankers in multi-stage retrieval pipeline. In Proceedings of the 43rd European Conference on Information Retrieval (ECIR 2021). Luyu Gao, Xueguang Ma, Jimmy Lin, and Jamie Callan. 2023. Precise zero-shot dense retrieval without relevance labels. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1762–1777, Toronto, Canada.
2309.15088#39
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
40
Sebastian Hofstätter, Sheng-Chieh Lin, Jheng-Hong Yang, Jimmy Lin, and Allan Hanbury. 2021. Efficiently teaching an effective dense retriever with balanced topic aware sampling. In Proceedings of the 44th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021), pages 113–122. Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2021. Unsupervised dense information retrieval with contrastive learning. arXiv:2112.09118. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Jimmy Lin. 2021. A proposed conceptual framework for a representational approach to information retrieval. arXiv:2110.01529.
2309.15088#40
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
41
Jimmy Lin. 2021. A proposed conceptual framework for a representational approach to information retrieval. arXiv:2110.01529. Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, Jheng-Hong Yang, Ronak Pradeep, and Rodrigo Nogueira. 2021a. Pyserini: A Python toolkit for reproducible information retrieval research with sparse and dense representations. In Proceedings of the 44th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021), pages 2356–2362. Jimmy Lin, Rodrigo Nogueira, and Andrew Yates. 2021b. Pretrained Transformers for Text Ranking: BERT and Beyond. Morgan & Claypool Publishers. Jimmy Lin, Ronak Pradeep, Tommaso Teofili, and Jasper Xian. 2023. Vector search with OpenAI embeddings: Lucene is all you need. arXiv:2308.14963. Xueguang Ma, Xinyu Zhang, Ronak Pradeep, and Jimmy Lin. 2023. Zero-shot listwise document reranking with a large language model. arXiv:2305.02156.
2309.15088#41
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
42
Irina Matveeva, Chris Burges, Timo Burkard, Andy Laucius, and Leon Wong. 2006. High accuracy retrieval with multiple nested ranker. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2006), pages 437–444, Seattle, Washington. Arvind Neelakantan, Tao Xu, Raul Puri, Alec Radford, Jesse Michael Han, Jerry Tworek, Qiming Yuan, Nikolas Tezak, Jong Wook Kim, Chris Hallacy, Johannes Heidecke, Pranav Shyam, Boris Power, Tyna Eloundou Nekoul, Girish Sastry, Gretchen Krueger, David Schnurr, Felipe Petroski Such, Kenny Hsu, Madeleine Thompson, Tabarak Khan, Toki Sherbakov, Joanne Jang, Peter Welinder, and Lilian Weng. 2022. Text and code embeddings by contrastive pre-training. arXiv:2201.10005. Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage re-ranking with BERT. arXiv:1901.04085.
2309.15088#42
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
43
Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage re-ranking with BERT. arXiv:1901.04085. Rodrigo Nogueira, Zhiying Jiang, Ronak Pradeep, and Jimmy Lin. 2020. Document ranking with a pretrained sequence-to-sequence model. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 708–718. Rodrigo Nogueira, Wei Yang, Kyunghyun Cho, and Jimmy Lin. 2019. Multi-stage document ranking with BERT. arXiv:1910.14424. Cicero Nogueira dos Santos, Xiaofei Ma, Ramesh Nallapati, Zhiheng Huang, and Bing Xiang. 2020. Beyond [CLS] through ranking by generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1722–1727, Online. Ronak Pradeep, Kai Hui, Jai Gupta, Adam D. Lelkes, Honglei Zhuang, Jimmy Lin, Donald Metzler, and Vinh Q. Tran. 2023. How does generative retrieval scale to millions of passages? arXiv:2305.11841.
2309.15088#43
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
44
Ronak Pradeep, Yilin Li, Yuetong Wang, and Jimmy Lin. 2022a. Neural query synthesis and domain-specific ranking templates for multi-stage clinical trial matching. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2022), pages 2325–2330, Madrid, Spain. Ronak Pradeep, Yuqi Liu, Xinyu Zhang, Yilin Li, Andrew Yates, and Jimmy Lin. 2022b. Squeezing water from a stone: A bag of tricks for further improving cross-encoder effectiveness for reranking. In Proceedings of the 44th European Conference on Information Retrieval (ECIR 2022), Part I, pages 655–670, Stavanger, Norway. Ronak Pradeep, Rodrigo Nogueira, and Jimmy Lin. 2021. The expando-mono-duo design pattern for text ranking with pretrained sequence-to-sequence models. arXiv:2101.05667.
2309.15088#44
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
45
Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, and Michael Bendersky. 2023. Large language models are effective text rankers with pairwise ranking prompting. arXiv:2306.17563. Stephen E. Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends in Information Retrieval, 3(4):333–389. Devendra Singh Sachan, Mike Lewis, Dani Yogatama, Luke Zettlemoyer, Joelle Pineau, and Manzil Zaheer. 2023. Questions are all you need to train a dense passage retriever. Transactions of the Association for Computational Linguistics, 11:600–616. Weiwei Sun, Lingyong Yan, Xinyu Ma, Pengjie Ren, Dawei Yin, and Zhaochun Ren. 2023. Is ChatGPT good at search? Investigating large language models as re-ranking agent. arXiv:2304.09542.
2309.15088#45
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
47
Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv:2307.09288.
2309.15088#47
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.15088
48
Lidan Wang, Jimmy Lin, and Donald Metzler. 2011. A cascade ranking model for efficient ranked retrieval. In Proceedings of the 34th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2011), pages 105–114, Beijing, China. Honglei Zhuang, Zhen Qin, Rolf Jagerman, Kai Hui, Ji Ma, Jing Lu, Jianmo Ni, Xuanhui Wang, and Michael Bendersky. 2022. RankT5: Fine-tuning T5 for text ranking with ranking losses. arXiv:2210.10634.
2309.15088#48
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT-3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT-4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.
http://arxiv.org/pdf/2309.15088
Ronak Pradeep, Sahel Sharifymoghaddam, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20230926
20230926
[ { "id": "2301.02998" }, { "id": "2110.01529" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2109.10086" }, { "id": "2210.10634" }, { "id": "2209.11755" }, { "id": "2101.05667" }, { "id": "2305.11841" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2306.17563" }, { "id": "2308.14963" }, { "id": "2112.09118" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2003.07820" } ]
2309.14525
0
Preprint ALIGNING LARGE MULTIMODAL MODELS WITH FACTUALLY AUGMENTED RLHF Zhiqing Sun∗♠, Sheng Shen∗♣, Shengcao Cao∗♢ Haotian Liu♡, Chunyuan Li♮, Yikang Shen△, Chuang Gan†∇△, Liang-Yan Gui†♢ Yu-Xiong Wang†♢, Yiming Yang†♠, Kurt Keutzer†♣, Trevor Darrell†♣ ♣UC Berkeley, ♠CMU, ♢UIUC, ♡UW–Madison, ∇UMass Amherst ♮Microsoft Research, △MIT-IBM Watson AI Lab # ABSTRACT
2309.14525#0
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
1
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in “hallucination”, generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing
2309.14525#1
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
2
To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
2309.14525#2
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
3
# INTRODUCTION Large Language Models (LLMs; Brown et al. (2020); Chowdhery et al. (2022); OpenAI (2023)) can delve into the multimodal realm either by further pre-training with image-text pairs (Alayrac et al.; Awadalla et al., 2023) or by fine-tuning them with specialized vision instruction tuning datasets (Liu et al., 2023a; Zhu et al., 2023), leading to the emergence of powerful Large Multimodal Models (LMMs). Yet, developing LMMs faces challenges, notably the gap between the volume and quality of multimodal data versus text-only datasets. Consider the LLaVA model (Liu et al., 2023a), which is initialized from a pre-trained vision encoder (Radford et al., 2021) and an instruction-tuned language model (Chiang et al., 2023). It is trained on just 150K synthetic image-based dialogues, which is far less than what text-only models use (e.g., Flan (Longpre et al., 2023) utilizes over 100M examples spanning 1800 tasks). Such limitations in data can lead to misalignment between the vision and language modalities. Consequently, LMMs may produce hallucinated outputs, which are not accurately anchored to the context provided by images.
2309.14525#3
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
4
To mitigate the challenges posed by the scarcity of high-quality visual instruction tuning data for LMM training, we introduce LLaVA-RLHF, a vision-language model trained for improved multimodal alignment. One of our key contributions is the adaptation of the Reinforcement Learning from Human Feedback (RLHF) (Stiennon et al., 2020; Ouyang et al., 2022; Bai et al., 2022a), a general and scalable alignment paradigm that shows great success for text-based AI agents, to the ∗Equal contribution. Ordering is determined by dice rolling. †Equal advising. Question: Where is this photo taken?
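As background for the adaptation described above, the text-domain RLHF recipe being borrowed typically optimizes the policy against a learned reward while penalizing divergence from the supervised fine-tuned model; the objective below is the standard formulation from that line of work (e.g., Ouyang et al., 2022), included here as a reference rather than quoted from this paper.

```latex
\max_{\pi_\theta}\;
\mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot \mid x)}
\big[\, r_\phi(x, y) \,\big]
\;-\; \beta\,
\mathbb{D}_{\mathrm{KL}}\!\left( \pi_\theta(\cdot \mid x) \,\|\, \pi_{\mathrm{SFT}}(\cdot \mid x) \right)
```

Here $r_\phi$ is the learned reward model (trained from the human comparisons described above), $\pi_{\mathrm{SFT}}$ is the supervised fine-tuned policy, and $\beta$ controls how far the RL-tuned model may drift from it.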
2309.14525#4
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
5
Ground Truth: From the slogan “Welcome to Houston, Texas”, we can infer that this photo is taken at an airport in Houston, Texas. LLaVA: This photo is taken at an airport. LLaVA-SFT+: This photo is taken at the Houston airport. LLaVA-RLHF: This photo is taken in the baggage claim area of an airport, specifically in the lobby of the George Bush Intercontinental Airport in Houston, Texas. The large sign in the background indicates the airport’s name and location. The presence of multiple people, handbags, and suitcases in the scene further supports the fact that this is indeed an airport setting. Question: What is the gender of the person riding the motorcycle in the image?
2309.14525#5
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
6
Ground Truth: No one is riding the motorcycle in the image. LLaVA: The person riding the motorcycle in the image is a woman. LLaVA-SFT+: The person riding the motorcycle in the image is a man. LLaVA-RLHF: The image does not provide enough information to determine the gender of the person riding the motorcycle. It only shows a motorcycle parked on the side of the road, and there is no indication of the rider’s gender. The rider could be a man or a woman, or even someone who identifies as non-binary or gender non-conforming. It is important to avoid making assumptions about a person’s gender based on visual cues alone, as these assumptions can be inaccurate and potentially harmful. Table 1: Qualitative examples to illustrate the effect of RLHF on Large Multimodal Model. LLaVA-RLHF is presented to be more helpful (above) and less hallucinated (bottom).
2309.14525#6
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
7
multimodal alignment for LMMs. By collecting human preferences with an emphasis on detecting hallucinations (Footnote 1) and utilizing those preferences in reinforcement learning for LMM fine-tuning (Ziegler et al., 2019; Stiennon et al., 2020), this approach can improve the multimodal alignment with a relatively low annotation cost, e.g., collecting 10K human preferences for image-based conversations with $3000. To the best of our knowledge, this approach is the first successful adaptation of RLHF to multimodal alignment.
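The reward model behind this step is conventionally trained on pairwise comparisons with a Bradley-Terry-style loss; the snippet below is a generic sketch of that loss applied to the hallucination-focused preferences described above, not the paper's released code.

```python
# Generic pairwise preference loss for reward-model training (Bradley-Terry
# style): push the preferred (less hallucinated) response to score higher.
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    # -log sigmoid(r_chosen - r_rejected), averaged over the batch.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Example with scores the reward model might assign to two comparison pairs.
loss = preference_loss(torch.tensor([1.3, 0.2]), torch.tensor([0.4, 0.9]))
```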
2309.14525#7
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
8
A potential issue with the current RLHF paradigm is called reward hacking, which means achieving high scores from the reward model does not necessarily lead to improvement in human judgments. To prevent reward hacking, previous work (Bai et al., 2022a; Touvron et al., 2023b) proposed to iteratively collect “fresh” human feedback, which tends to be costly and cannot effectively utilize existing human preference data. In this work, we propose a more data-efficient alternative, i.e., we try to make the reward model capable of leveraging existing human-annotated data and knowledge in larger language models. Firstly, we improve the general capabilities of the reward model by using a better vision encoder with higher resolutions and a larger language model. Secondly, we introduce a novel algorithm named Factually Augmented RLHF (Fact-RLHF), which calibrates the reward signals by augmenting them with additional information such as image captions or ground-truth multi-choice options, as illustrated in Fig. 1. ¹We instructed crowdworkers to prioritize the responses that exhibit better multimodal alignment and minimize hallucinations. That is, if two responses are free of hallucinations, the crowdworkers were asked to choose/create a more helpful one.
2309.14525#8
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
9
[Figure 1, panel titles: (a) Misaligned Supervised Fine-Tuning (SFT) Data contains Hallucination; (b) Collect Human Preference (More Helpful & Less Hallucinated) Data for Reward Models (RM); (c) Factually Augmented Reinforcement Learning from Human Feedback (Fact-RLHF).] Figure 1: Illustration of how hallucination may occur during the Supervised Fine-Tuning (SFT) phase of LMM training and how Factually Augmented RLHF alleviates the issue of limited capacity in the reward model which is initialized from the SFT model.
2309.14525#9
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
10
To improve the general capabilities of LMMs during the Supervised Fine-Tuning (SFT) stage, we further augment the synthetic vision instruction tuning data (Liu et al., 2023a) with existing high-quality human-annotated multi-modal data in the conversation format. Specifically, we convert VQA-v2 (Goyal et al., 2017a) and A-OKVQA (Schwenk et al., 2022) into a multi-round QA task, and Flickr30k (Young et al., 2014b) into a Spotting Captioning task (Chen et al., 2023a), and train the LLaVA-SFT+ models based on the new mixture of data.
2309.14525#10
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
11
Lastly, we look into assessing the multimodal alignment of LMMs in real-world generation scenarios, placing particular emphasis on penalizing any hallucinations. We create a set of varied benchmark questions that cover the 12 main object categories in COCO (Lin et al., 2014) and include 8 different task types, leading to MMHAL-BENCH. Our evaluation indicates that this benchmark dataset aligns well with human evaluations, especially when scores are adjusted for anti-hallucinations. In our experimental evaluation, as the first LMM trained with RLHF, LLaVA-RLHF delivers impressive outcomes. We observed a notable enhancement on LLaVA-Bench, achieving 94%, an improvement by 60% in MMHAL-BENCH, and established new performance benchmarks for LLaVA with a 52.4% score on MMBench (Liu et al., 2023b) and an 82.7% F1 on POPE (Li et al., 2023d). We have made our code, model, and data publicly available at https://llava-rlhf.github.io.
2309.14525#11
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
12
Instruction We have developed an AI assistant adept at facilitating image-based conversations. However, it occasionally generates what we call hallucinations, which are inaccuracies unsupported by the image content or real-world knowledge. In this task, we request that you select the most appropriate response from the AI model based on the conversation context. When making this selection, primarily consider these two factors: • Honesty: Fundamentally, the AI should provide accurate information and articulate its uncertainty without misleading the user. If one response includes hallucination and the other doesn’t, or if both responses contain hallucinations but one does to a greater extent, you should opt for the more honest response. • Helpfulness: In scenarios where both responses are free from hallucinations, you should opt for the more helpful one. The AI should attempt to accomplish the task or answer the question posed, provided it’s not harmful, in the most helpful and engaging manner possible. Annotation Task Please select the better response from A and B [IMAGE] [CONVERSATION CONTEXT] [RESPONSE A] [RESPONSE B] Question 1: Which response has fewer hallucinations in terms of the given image? Question 2: If you have selected a tie between Response 1 and Response 2 from the previous question, which response would be more helpful or less incorrect? Table 2: The instruction to the crowdworkers for human preference collection.
2309.14525#12
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
13
Table 2: The instruction to the crowdworkers for human preference collection. 2 METHOD 2.1 MULTIMODAL RLHF Reinforcement Learning from Human Feedback (RLHF) (Ziegler et al., 2019; Stiennon et al., 2020; Ouyang et al., 2022; Bai et al., 2022a) has emerged as a powerful and scalable strategy for aligning Large Language Models (LLMs) with human values. In this work, we use RLHF to align LMMs. The basic pipeline of our multimodal RLHF can be summarized into three stages: Multimodal Supervised Fine-Tuning A vision encoder and a pre-trained LLM are jointly fine-tuned on an instruction-following demonstration dataset using token-level supervision to produce a supervised fine-tuned (SFT) model πSFT.
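A high-level sketch of the three-stage pipeline summarized above; the three stage functions are passed in as callables and are placeholders for illustration, not a real API.

def multimodal_rlhf(sft_stage, rm_stage, rl_stage, sft_data, preference_data, rl_prompts):
    policy_sft = sft_stage(sft_data)                             # stage 1: multimodal supervised fine-tuning
    reward_model = rm_stage(policy_sft, preference_data)         # stage 2: multimodal preference modeling
    policy_rl = rl_stage(policy_sft, reward_model, rl_prompts)   # stage 3: RL against the reward model
    return policy_rl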
2309.14525#13
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
14
Multimodal Preference Modeling In this stage, a reward model, alternatively referred to as a preference model, is trained to give a higher score to the “better” response. The pairwise comparison training data are typically annotated by human annotators. Formally, let the aggregated preference data be represented as DRM = {(I, x, y0, y1, i)}, where I denotes the image, x denotes the prompt, y0 and y1 are two associated responses, and i indicates the index of the preferred response. The reward model employs a cross-entropy loss function: L(rθ) = −E(I,x,y0,y1,i)∼DRM [log σ(rθ(I, x, yi) − rθ(I, x, y1−i))] .
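A minimal sketch of how the pairwise preference loss above could be computed, assuming a reward_model callable that returns a scalar score tensor for an (image, prompt, response) triple; this is an illustrative reconstruction, not the authors' released training code.

import torch.nn.functional as F

def preference_loss(reward_model, image, prompt, resp_a, resp_b, preferred_idx):
    # Score both candidate responses with the same reward model r_theta.
    score_a = reward_model(image, prompt, resp_a)
    score_b = reward_model(image, prompt, resp_b)
    # Order the scores so that the human-preferred response comes first.
    chosen, rejected = (score_a, score_b) if preferred_idx == 0 else (score_b, score_a)
    # Cross-entropy loss from the text: -log sigmoid(r(chosen) - r(rejected)).
    return -F.logsigmoid(chosen - rejected).mean()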
2309.14525#14
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
15
Reinforcement Learning Here, a policy model, initialized through multimodal supervised fine-tuning (SFT) (Ouyang et al., 2022; Touvron et al., 2023b), is trained to generate an appropriate response for each user query by maximizing the reward signal as provided by the reward model. To address potential over-optimization challenges, notably reward hacking, a per-token KL penalty derived from the initial policy model (Ouyang et al., 2022) is sometimes applied. Formally, given the set of collected images and user prompts, DRL = {(I, x)}, along with the fixed initial policy model πINIT and the RL-optimized model πRL_φ, the full optimization loss is articulated as: L(πRL_φ) = −E(I,x)∼DRL, y∼πRL_φ(y|I,x) [ rθ(I, x, y) − β · DKL( πRL_φ(y|I, x) ‖ πINIT(y|I, x) ) ], where β is the hyper-parameter to control the scale of the KL penalty. 2.2 AUGMENTING LLAVA WITH HIGH-QUALITY INSTRUCTION-TUNING
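An illustrative sketch of the KL-penalized objective above, assuming per-token log-probabilities of the sampled response under the RL policy and the frozen initial policy are already available as tensors; the default beta value and the simple log-ratio KL estimate are assumptions, not reported implementation details.

def kl_penalized_reward(reward, logprobs_rl, logprobs_init, beta=0.1):
    # reward: scalar r_theta(I, x, y) from the (factually augmented) reward model
    # logprobs_rl / logprobs_init: per-token log-probs of the sampled response y
    # under the RL-optimized policy and the frozen initial (SFT) policy
    kl_per_token = logprobs_rl - logprobs_init   # simple per-token KL estimate
    return reward - beta * kl_per_token.sum()    # quantity the RL policy maximizes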
2309.14525#15
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
16
2.2 AUGMENTING LLAVA WITH HIGH-QUALITY INSTRUCTION-TUNING Recent studies (Zhou et al., 2023; Touvron et al., 2023b) show that high-quality instruction tuning data is essential for aligning Large Language Models (LLMs). We find this becomes even more salient for LMMs. As these models traverse vast textual and visual domains, clear tuning instructions are crucial. Correctly aligned data ensures models produce contextually relevant outputs, effectively bridging language and visual gaps. For example, LLaVA synthesized 150k visual instruction data using the text-only GPT-4, where an image is represented as the associated captions on bounding boxes to prompt GPT-4. Though careful filtering has been applied to improve the quality, the pipeline can occasionally generate visually misaligned instruction data that cannot be easily removed with an automatic filtering script, as highlighted in Table 1.
2309.14525#16
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
17
In this work, we consider enhancing LLaVA (98k conversations, after holding out 60k conversations for preference modeling and RL training) with high-quality instruction-tuning data derived from existing human annotations. Specifically, we curated three categories of visual instruction data: “Yes” or “No” queries from VQA-v2 (83k) (Goyal et al., 2017b), multiple-choice questions from A-OKVQA (16k) (Marino et al., 2019), and grounded captions from Flickr30k (23k) (Young et al., 2014a). Our analysis revealed that this amalgamation of datasets significantly improved LMM capabilities on benchmark tests. Impressively, these results surpassed models (Dai et al., 2023; Li et al., 2023a; Laurençon et al., 2023) trained on datasets an order of magnitude larger than ours, as evidenced by Table 7 and 4. For a comprehensive breakdown of each dataset’s influence, refer to Section 3.5. 2.3 HALLUCINATION-AWARE HUMAN PREFERENCE COLLECTION
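A hypothetical sketch of converting one VQA-v2-style record into the conversational instruction-tuning format described above; the field names ("image_id", "question", "answer") and the output schema are assumptions for illustration only, not the authors' exact data format.

def vqa_to_conversation(record):
    # Wrap a single question/answer pair as one human/assistant exchange.
    return {
        "image": record["image_id"],
        "conversations": [
            {"from": "human", "value": record["question"]},
            {"from": "gpt", "value": record["answer"]},
        ],
    }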
2309.14525#17
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
18
2.3 HALLUCINATION-AWARE HUMAN PREFERENCE COLLECTION Inspired by the recent RLHF studies that collect helpfulness and harmlessness preferences (Bai et al., 2022b; Touvron et al., 2023b) separately, in this study, we decide to differentiate between responses that are merely less helpful and those that are inconsistent with the images (often characterized by multimodal hallucinations). To achieve this, we provide crowdworkers with the template illustrated in Table 2 to guide their annotations when comparing two given responses. With our current template design, we aim to prompt crowdworkers to identify potential hallucinations in the model’s responses. Nonetheless, our training process integrates a single reward model that emphasizes both multimodal alignment and overall helpfulness2. We collect human preferences on 10k hold-out LLaVA data by re-sampling the last response with our SFT model and a temperature of 0.7. The reward model is initialized from the SFT model to obtain the basic multimodal capabilities. 2.4 FACTUALLY AUGMENTED RLHF (FACT-RLHF)
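A rough sketch of the preference-pair collection step described above, re-sampling the last response with the SFT model at temperature 0.7; the sft_model.generate interface is an assumed placeholder, not an actual API.

def resample_last_turn(sft_model, image, conversation, temperature=0.7):
    # Drop the original last response and let the SFT model re-generate it,
    # yielding a second candidate to compare against the original answer.
    context = conversation[:-1]
    new_response = sft_model.generate(image, context, temperature=temperature)
    return context + [new_response]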
2309.14525#18
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
19
2.4 FACTUALLY AUGMENTED RLHF (FACT-RLHF) We conduct multimodal RLHF on 50k hold-out LLaVA conversations, with additional 12k multi-choice questions from A-OKVQA and 10k yes/no questions subsampled from VQA-v2. Due to the concerns of existing hallucinations in the synthetic multi-round conversation data of LLaVA, we only use the first question in each conversation for RL training, which avoids the pre-existing hallucinations in the conversational context. Reward Hacking in RLHF In preliminary multimodal RLHF experiments, we observe that due to the intrinsic multimodal misalignment in the SFT model, the reward model is weak and sometimes cannot effectively detect hallucinations in the RL model’s responses. In the text domain, previous work (Bai et al., 2022a; Touvron et al., 2023b) proposed to iteratively collect “fresh” human feedback. However, this can be quite costly, cannot effectively utilize existing human-annotated data, and there is no guarantee that more preference data can significantly improve the discriminative capabilities of the reward model for multimodal problems. Factual Augmentation To augment the capability of the reward model, we propose Factually Augmented RLHF (Fact-RLHF), where the reward model has access to additional ground-truth
2309.14525#19
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
20
Factual Augmentation To augment the capability of the reward model, we propose Factually Augmented RLHF (Fact-RLHF), where the reward model has access to additional ground-truth information such as image captions to calibrate its judgment.² In original RLHF (Stiennon et al., 2020; OpenAI, 2022), the reward model needs to judge the quality of the response only based on the user query (i.e., the input image and prompt): Image: [IMAGE] User: [USER PROMPT] Assistant: [RESPONSE] Reward Model: [SCORE] In Factually Augmented RLHF (Fact-RLHF), the reward model has additional information about the textual descriptions of the image: Image: [IMAGE] Factual Information: [5 COCO IMAGE CAPTIONS / 3 A-OKVQA RATIONALES] User: [USER PROMPT] Assistant: [RESPONSE] Augmented Reward Model: [SCORE] ²We are considering the development of a distinct Honest reward model, inspired by the approach in Touvron et al. (2023b). This introduces the possibility of constructing a piecewise Honesty-prioritized reward model. We earmark this direction for future exploration.
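A small sketch of assembling the two reward-model inputs shown above; the exact prompt strings are assumptions based on the placeholders in the text, not the authors' verbatim template.

def build_reward_prompt(user_prompt, response, factual_info=None):
    # factual_info: e.g. the 5 COCO captions or 3 A-OKVQA rationales, when available.
    parts = ["Image: [IMAGE]"]
    if factual_info:
        parts.append("Factual Information: " + " ".join(factual_info))
    parts.append("User: " + user_prompt)
    parts.append("Assistant: " + response)
    return "\n".join(parts)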
2309.14525#20
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
21
This prevents the reward model from being hacked by the policy model when the policy model generates some hallucinations that are clearly not grounded by the image captions. For general questions with COCO images, we concatenate the five COCO captions as the additional factual information, while for A-OKVQA questions, we use the annotated rationales as the factual information. The factually augmented reward model is trained on the same binary preference data as the vanilla reward model, except that the factual information is provided both during the model fine-tuning and inference. Symbolic Rewards: Correctness Penalty & Length Penalty In some of our RL data, certain questions come with a predetermined ground-truth answer. This includes binary choices (e.g., “Yes/No”) in VQA-v2 and multiple-choice options (e.g., “ABCD”) in A-OKVQA. These annotations can also be regarded as additional factual information. Therefore, in the Fact-RLHF algorithm, we further introduce a symbolic reward mechanism that penalizes selections that diverge from these ground-truth options.
2309.14525#21
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
22
Furthermore, we observed that RLHF-trained models often produce more verbose outputs, a phenomenon also noted by Dubois et al. (2023). While these verbose outputs might be favored by users or by automated LLM-based evaluation systems (Sun et al., 2023b; Zheng et al., 2023), they tend to introduce more hallucinations for LMMs. In this work, we follow Sun et al. (2023a) and incorporate the response length, measured in the number of tokens, as an auxiliary penalizing factor. 3 EXPERIMENTS 3.1 NEURAL ARCHITECTURES
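An illustrative combination of the learned reward with the symbolic correctness and length penalties described above; the penalty coefficients are assumed values for the sketch, not numbers reported in the paper.

def total_reward(rm_score, response_tokens, ground_truth=None, predicted=None,
                 length_coef=0.01, correctness_penalty=1.0):
    reward = rm_score - length_coef * len(response_tokens)  # length penalty in tokens
    if ground_truth is not None and predicted != ground_truth:
        reward -= correctness_penalty  # answer diverges from the "Yes/No" or "ABCD" ground truth
    return reward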
2309.14525#22
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
23
3 EXPERIMENTS 3.1 NEURAL ARCHITECTURES Base Model We adopt the same network architecture as LLaVA (Liu et al., 2023a). Our LLM is based on Vicuna (Touvron et al., 2023a; Chiang et al., 2023), and we utilize the pre-trained CLIP visual encoder, ViT-L/14 (Radford et al., 2021). We use grid features both before and after the final Transformer layer. To project image features to the word embedding space, we employ a linear layer. It’s important to note that we leverage the pre-trained checkpoints of the linear projection matrix from LLaVA, concentrating on the end-to-end fine-tuning phase for multi-modal alignment in our study. For LLaVA-SFT+-7b, we use a Vicuna-V1.5-7b LLM and ViT-L/14 with image resolution 256 × 256. For LLaVA-SFT+-13b, we use a Vicuna-V1.5-13b LLM and ViT-L/14 with image resolution 336 × 336.
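A minimal PyTorch sketch of the projection step described above, where CLIP ViT grid features are mapped into the LLM word-embedding space by a single linear layer; the 1024 and 4096 dimensions are assumptions (typical ViT-L/14 and 7B-LLM hidden sizes), not values taken from the paper:

```python
import torch
import torch.nn as nn

class VisionProjector(nn.Module):
    """One linear layer that projects CLIP grid features into the LLM
    word-embedding space, in the LLaVA-style setup described above."""
    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Linear(vision_dim, llm_dim)

    def forward(self, grid_features: torch.Tensor) -> torch.Tensor:
        # grid_features: (batch, num_patches, vision_dim)
        # returns visual "tokens" of shape (batch, num_patches, llm_dim)
        return self.proj(grid_features)

# Example: VisionProjector()(torch.randn(1, 256, 1024)).shape -> (1, 256, 4096)
```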
2309.14525#23
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
24
RL Models: Reward, Policy, and Value The architecture of the reward model is the same as the base LLaVA model, except that the embedding output of the last token is linearly projected to a scalar value to indicate the reward of the whole response. Following Dubois et al. (2023), we initialize the value model from the reward model. Therefore, when training an LLaVA-7B-based policy model with an LLaVA-13B-based reward model, the value model is also of 13B size. To fit all the models (i.e., policy, reward, value, original policy) into one GPU, we adopt LoRA (Hu et al., 2021) for all the fine-tuning processes in RLHF. We use Proximal Policy Optimization (PPO; Table 3: Automatic evaluation of LLaVA-RLHF on the LLaVA-Bench Evaluation. GPT-4 compares the answers from the VLM model outputs with the answers by GPT-4 (text-only) and gives a rating. We report the relative scores (Liu et al., 2023a) of VLM models compared to GPT-4 (text-only).
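A minimal sketch of the reward head described above: the hidden state of the final token of a response is linearly projected to a single scalar reward. The hidden size is an assumed value and the surrounding LLM forward pass is omitted:

```python
import torch
import torch.nn as nn

class RewardHead(nn.Module):
    """Map the last-token hidden state of the base LLaVA model to a scalar
    reward for the whole response (hidden_dim of 5120 is an assumption,
    roughly a 13B LLM hidden size)."""
    def __init__(self, hidden_dim: int = 5120):
        super().__init__()
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, last_token_hidden: torch.Tensor) -> torch.Tensor:
        # last_token_hidden: (batch, hidden_dim) -> (batch,) scalar rewards
        return self.score(last_token_hidden).squeeze(-1)
```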
2309.14525#24
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
25
Model | Conv | Detail | Complex | Full-Set
LLaVA7B | 75.1 | 75.4 | 92.3 | 81.0
VIGC7B | 83.3 | 80.6 | 93.1 | 85.8
LLaVA-SFT+7B | 88.8 | 74.6 | 95.0 | 86.3
LLaVA-RLHF7B | 93.0 | 79.0 | 109.5 | 94.1
LLaVA13B×336 | 87.2 | 74.3 | 92.9 | 84.9
VIGC13B×336 | 88.9 | 77.4 | 93.5 | 86.8
LLaVA-SFT+13B×336 | 85.8 | 75.5 | 93.9 | 85.2
LLaVA-RLHF13B×336 | 93.9 | 82.5 | 110.1 | 95.6
[Figure 2: radar chart comparing IDEFICS9B, Kosmos-2, LLaVA13B×336, IDEFICS80B, InstructBLIP13B, and LLaVA-RLHF13B×336 across the MMHAL-BENCH categories (Overall, Adversarial, Holistic, Comparison, Counting, Relation, ...).]
Figure 2: Detailed performance of different models on the eight categories in MMHAL-BENCH, where “Overall” indicates the averaged performance across all categories. The questions are collected by adversarially filtering on the original LLaVA13B×336 model.
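The relative scores in Table 3 follow the LLaVA-Bench convention of comparing GPT-4's ratings of a model's answers against its ratings of the text-only GPT-4 reference answers. A minimal sketch of that aggregation, with the exact formula assumed from Liu et al. (2023a) rather than stated in this chunk:

```python
from typing import Sequence

def relative_score(model_ratings: Sequence[float],
                   gpt4_ratings: Sequence[float]) -> float:
    """Report a model's total judge rating as a percentage of the rating
    given to the text-only GPT-4 answers on the same questions."""
    return 100.0 * sum(model_ratings) / sum(gpt4_ratings)

# A value above 100 (e.g. the 109.5 / 110.1 entries in the Complex column)
# means the judge preferred the VLM's answers over the text-only reference.
```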
2309.14525#25
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
27
# 3.2 MMHAL-BENCH DATA COLLECTION To quantify and evaluate the hallucination in LMM responses, we have created a new benchmark MMHAL-BENCH. There are two major differences between MMHAL-BENCH and previous VLM benchmarks: 1) Speciality: In contrast to prevalent LMM benchmarks Liu et al. (2023a;b); Li et al. (2023d) that evaluate the response quality in the general sense (e.g., helpfulness, relevance), we focus on determining whether hallucination exists in the LMM responses. Our evaluation metrics are directly developed on this main criterion. 2) Practicality: Some previous LMM benchmarks Li et al. (2023d); Rohrbach et al. (2018) also examine hallucination, but they have limited the questions to yes/no questions, and we found that their results may sometimes disagree with the detailed description generated by the LMM. Instead of over-simplifying the questions, we adopt general, realistic, and open-ended questions in our MMHAL-BENCH, which can better reflect the response quality in practical user-LMM interactions.
2309.14525#27
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
28
In MMHAL-BENCH, we have meticulously designed 96 image-question pairs, covering 8 question categories × 12 object topics. More specifically, we have observed that LMMs often make false claims about the image contents when answering some types of questions, and thus design our questions according to these types: • Object attribute: LMMs incorrectly describe the visual attributes of individual objects, such as color and shape. • Adversarial object: LMMs answer questions involving something that does not exist in the image, instead of pointing out that the referred object cannot be found. • Comparison: LMMs incorrectly compare the attributes of multiple objects. • Counting: LMMs fail to count the number of the named objects. • Spatial relation: LMMs fail to understand the spatial relations between multiple objects in the response. • Environment: LMMs make wrong inferences about the environment of the given image. • Holistic description: LMMs make false claims about contents in the given image when giving a comprehensive and detailed description of the whole image. • Others: LMMs fail to recognize the text or icons, or incorrectly reason based on the observed visual information.
2309.14525#28
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
29
comprehensive and detailed description of the whole image. • Others: LMMs fail to recognize the text or icons, or incorrectly reason based on the observed visual information. We create and filter the questions in an adversarial manner. More specifically, we design the image-question pairs to ensure that the original LLaVA13B×336 model hallucinates when answering these questions. While these questions are initially tailored based on LLaVA13B×336's behavior, we have observed that they also have a broader applicability, causing other LMMs to hallucinate as well. To avoid data leakage or evaluation on data that LMMs have observed during training, we select images from the validation and test sets of OpenImages (Kuznetsova et al., 2020) and design all brand-new questions. Our image-question pairs cover 12 common object meta-categories from COCO (Lin et al., 2014), including “accessory”, “animal”, “appliance”, “electronic”, “food”, “furniture”, “indoor”, “kitchen”, “outdoor”, “person”, “sports”, and “vehicle”.
2309.14525#29
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
30
When evaluating LMMs on MMHAL-BENCH, we employ the powerful GPT-4 model (OpenAI, 2023) to analyze and rate the responses. Currently, the publicly available GPT-4 API only supports text input, so it cannot judge directly based on the image contents. Therefore, to aid GPT-4's assessment, we also provide category names of the image content and a standard human-generated answer in the prompt, in addition to the question and LMM response pair. Consequently, GPT-4 can determine whether hallucination exists in the LMM response by comparing it against the image content and the thorough human-generated answer. When provided with adequate information from MMHAL-BENCH, GPT-4 can make reasonable decisions aligned with human judgments. For example, when deciding whether hallucination exists in responses from LLaVA13B×336 and IDEFICS80B, GPT-4 agrees with human judgments in 94% of the cases. Please see the Appendix for the example image-question pairs and GPT-4 prompts we used for MMHAL-BENCH evaluation. 3.3 RESULTS
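A minimal sketch of how the text-only judging setup described above could be assembled: the question, the image-content category names, the standard human-generated answer, and the LMM response are packed into one prompt for GPT-4. The wording of the template is an illustrative assumption, not the exact MMHAL-BENCH prompt (which is given in the paper's Appendix):

```python
from typing import Sequence

def build_mmhal_judge_prompt(question: str,
                             image_categories: Sequence[str],
                             standard_answer: str,
                             lmm_response: str) -> str:
    """Assemble a text-only judging prompt for a GPT-4 style judge.
    The phrasing here is assumed for illustration."""
    return (
        "You are judging whether a multimodal model's answer contains hallucination.\n"
        f"Image content categories: {', '.join(image_categories)}\n"
        f"Question: {question}\n"
        f"Standard human-generated answer: {standard_answer}\n"
        f"Model response: {lmm_response}\n"
        "Compare the response against the image content and the standard answer, "
        "state whether hallucination exists, and give a rating."
    )
```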
2309.14525#30
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
31
3.3 RESULTS We use LLaVA-Bench (Liu et al., 2023a) and our MMHAL-BENCH as our main evaluation metrics for their high alignment with human preferences. In addition, we conducted tests on widely-recognized Large Multimodal Model benchmarks. We employed MMBench (Liu et al., 2023b), a multi-modal benchmark offering an objective evaluation framework comprising 2,974 multiple-choice questions spanning 20 ability dimensions. This benchmark utilizes ChatGPT to juxtapose model predictions against desired choices, ensuring an equitable assessment of VLMs across varying instruction-following proficiencies. Furthermore, we incorporated POPE (Li et al., 2023d), a polling-based query technique, to offer an evaluation of Large Multimodal Model object perception tendencies.
2309.14525#31
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
32
High-quality SFT data is crucial for capability benchmarks. By delving into the specific performances on the capability benchmarks (i.e., MMBench and POPE), we observe a notable improvement in capabilities brought by high-quality instruction-tuning data (LLaVA-SFT+) in Tables 4 and 7. The LLaVA-SFT+ 7B model exemplifies this with an impressive performance of 52.1% on MMBench and an 82.7% F1 score on POPE, marking an improvement over the original LLaVA by margins of 13.4% and 6.7% respectively. However, it’s worth noting that LLaVA-SFT+ does Table 4: CircularEval multi-choice accuracy results on MMBench dev set. We adopt the following abbreviations: LR for Logical Reasoning; AR for Attribute Reasoning; RR for Relation Reasoning; FP-C for Fine-grained Perception (Cross Instance); FP-S for Fine-grained Perception (Single Instance); CP for Coarse Perception. Baseline results are taken from Liu et al. (2023b).
2309.14525#32
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
33
LLM | Data | Overall | LR | AR | RR | FP-S | FP-C | CP
OpenFlamingo9B | - | 6.6 | 4.2 | 15.4
MiniGPT-47B | 5k | 24.3 | 7.5 | 31.3
LLaMA-Adapter7B | 52k | 41.2 | 11.7 | 35.3
Otter-I9B | 2.8M | 51.4 | 32.5 | 56.7
Shikra7B | 5.5M | 58.8 | 25.8 | 56.7
Kosmos-2 | 14M | 59.2 | 46.7 | 55.7
InstructBLIP7B | 1.2M | 36.0 | 14.2 | 46.3
IDEFICS9B | 1M | 48.2 | 20.8 | 54.2
IDEFICS80B | 1M | 54.6 | 29.0 | 67.8
InstructBLIP13B | 1.2M | 44.0 | 19.1 | 54.2
LLaVA7B | 158k | 38.7 | 16.7 | 48.3
LLaVA-SFT+7B | 220k | 52.1 | 28.3 | 63.2
LLaVA-RLHF7B | 280k | 51.4 | 24.2 | 63.2
LLaVA13B×336 | 158k | 47.5 | 23.3 | 59.7
LLaVA-SFT+13B×336 | 220k | 57.5 | 25.8
LLaVA-RLHF13B×336 | 280k | 60.1 | 29.2
2309.14525#33
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
35
trail behind models like Kosmos and Shikra. Despite this, LLaVA-SFT+ stands out in terms of sample efficiency, utilizing only 280k fine-tuning examples—a 5% fraction of what’s employed by the aforementioned models. Furthermore, this enhancement isn’t confined to just one model size. When scaled up, LLaVA-SFT+13B×336 achieves commendable results, attaining 57.5% on MMBench and 82.9% on POPE. Comparatively, the effect of RLHF on the capability benchmarks is more mixed. LLaVA-RLHF shows subtle degradations at the 7B scale, but the 13B LLaVA-RLHF improves over LLaVA-SFT+ by 3% on MMBench. This phenomenon is similar to the Alignment Tax observed in previous work (Bai et al., 2022a). Nonetheless, with our current empirical scaling law of LLaVA-RLHF, we believe RLHF alignment would not damage the general capabilities of LMMs for models of larger scales.
2309.14525#35
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
36
RLHF improves human alignment benchmarks further. From another angle, even though high-quality instruction data demonstrates large gains in capability assessment, it does not improve much on human-alignment benchmarks, including LLaVA-Bench and MMHAL-BENCH, which is also evident in recent LLM studies (Wang et al., 2023). LLaVA-RLHF shows a significant improvement in aligning with human values. It attains scores of 2.05 (7B) and 2.53 (13B) on MMHAL-BENCH and improves LLaVA-SFT+ by over 10% on LLaVA-Bench. We also present qualitative examples in Table 1, which show that LLaVA-RLHF produces more reliable and helpful outputs. 3.4 ABLATION ANALYSIS We conduct ablation studies on LLaVA7B and evaluate over the four aforementioned benchmarks. 3.5 ABLATION ON HIGH-QUALITY INSTRUCTION-TUNING DATA
2309.14525#36
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
37
We conduct ablation studies on LLaVA7B and evaluate over the four aforementioned benchmarks. 3.5 ABLATION ON HIGH-QUALITY INSTRUCTION-TUNING DATA In Table 5, we evaluate the impact of individual instruction-tuning datasets. For the sake of simplicity, we did not adjust the mixture rate, earmarking that consideration for future research. Our findings indicate that A-OKVQA (Schwenk et al., 2022) contributes significantly to performance enhancements, boosting results by +9.8% on MMBench and a more modest +3.8% on POPE. In contrast, VQA-v2 (Goyal et al., 2017a) is particularly influential on POPE, where it leads to a 6% improvement, while only having a slight impact on MMBench. This differential can possibly be attributed to the overlapping “Yes/No” format in VQA and the multiple-choice structure of A-OKVQA. Flickr30k notably enhances the performance in LLaVA-Bench and MMHAL-BENCH — a Table 5: Ablation studies on methodologies (SFT, RLHF, and Fact-RLHF), data mixtures (LLaVA with additional datasets), and model sizes of the policy model (PM) and the reward model (RM).
2309.14525#37
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
38
Method | PM | RM | VQA | AOK | Flickr | MMBench | POPE | LLaVA-B | MMHAL-B
SFT | 7b | - | ✗ | ✗ | ✗ | 38.7 | 76.0 | 81.0 | 1.3
SFT | 7b | - | ✓ | ✗ | ✗ | 42.9 | 82.0 | 30.4 | 2.0
SFT | 7b | - | ✗ | ✓ | ✗ | 48.5 | 79.8 | 34.7 | 1.1
SFT | 7b | - | ✗ | ✗ | ✓ | 37.8 | 77.6 | 46.6 | 1.5
SFT | 7b | - | ✓ | ✓ | ✓ | 52.1 | 82.7 | 86.3 | 1.8
RLHF | 7b | 7b | ✗ | ✗ | ✗ | 40.0 | 78.2 | 85.4 | 1.4
RLHF | 7b | 7b | ✓ | ✓ | ✓ | 50.8 | 82.7 | 87.8 | 1.8
RLHF | 7b | 13b | ✓ | ✓ | ✓ | 48.9 | 82.7 | 93.4 | 1.8
Fact-RLHF | 7b | 13b | ✓ | ✓ | ✓ | 51.4 | 81.5 | 94.1 | 2.1
likely consequence of the inherently grounded nature of the task. Furthermore, amalgamating these three datasets results in compounded performance gains across various capability benchmarks. 3.6 ABLATION ON FACT-AUGMENTED RLHF
2309.14525#38
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
39
3.6 ABLATION ON FACT-AUGMENTED RLHF We compare the performance of Fact-Augmented RLHF (Fact-RLHF) with standard RLHF in Table 5. Our findings indicate that while the conventional RLHF exhibits improvement on LLaVA-Bench, it underperforms on MMHAL-BENCH. This can be attributed to the model’s tendency, during PPO, to manipulate the naive RLHF reward model by producing lengthier responses rather than ones that are less prone to hallucinations. On the other hand, our Fact-RLHF demonstrates enhancements on both LLaVA-Bench and MMHAL-BENCH. This suggests that Fact-RLHF not only better aligns with human preferences but also effectively minimizes hallucinated outputs. 3.7 DATA FILTERING V.S. RLHF
2309.14525#39
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
40
3.7 DATA FILTERING VS. RLHF In our preliminary tests, we employed the Fact-RLHF reward model to filter out 70%, 50%, and 30% of LLaVA data. Subsequently, we finetuned an LLaVA model on this filtered data, yielding scores of 81.2, 81.5, and 81.8 on LLaVA-Bench. However, performance on MMHAL-BENCH, POPE, and MMBench remained largely unchanged. We believe this stagnation can be attributed to two factors: the absence of a negative feedback mechanism preventing the model from identifying hallucinations in its output, and the potential limitations of our Fact-RLHF reward model, especially when compared against the high-capacity oracle models in previous successful studies (Touvron et al., 2023b). # 4 RELATED WORK
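For readers reproducing a comparable data-filtering baseline, the following is a minimal sketch of scoring instruction-tuning examples with a reward model and discarding the lowest-scoring fraction (70%, 50%, or 30% as in the ablation), before standard supervised finetuning on the remainder. The `score_fn` interface is a hypothetical placeholder, not the exact pipeline used in the paper.

```python
# Minimal sketch of reward-model-based data filtering under an assumed scoring interface.
from typing import Callable, Dict, List


def filter_by_reward(
    examples: List[Dict[str, str]],
    score_fn: Callable[[Dict[str, str]], float],
    drop_fraction: float = 0.5,  # e.g. 0.7, 0.5, 0.3 as in the ablation
) -> List[Dict[str, str]]:
    """Drop the lowest-scoring `drop_fraction` of examples according to the reward model."""
    scored = sorted(examples, key=score_fn, reverse=True)
    keep_n = int(len(scored) * (1.0 - drop_fraction))
    return scored[:keep_n]


# Usage (names illustrative): filtered = filter_by_reward(llava_data, reward_score, drop_fraction=0.7)
# The filtered subset is then used for ordinary supervised finetuning.
```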
2309.14525#40
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
41
Large Multimodal Models Recent success in Large Language Models (LLMs) such as GPTs (Brown et al., 2020; OpenAI, 2023), PaLM (Chowdhery et al., 2022; Anil et al., 2023), BLOOM (Scao et al., 2022; Muennighoff et al., 2022), LLaMA (Touvron et al., 2023a;b), Alpaca (Taori et al., 2023) and Vicuna (Chiang et al., 2023) has spurred significant improvements in multi-modal models. Flamingo (Alayrac et al., 2022) pioneered integrating LLMs into vision-language pretraining, utilizing gated cross-attention dense blocks to adapt to visual features; its open-source variants are OpenFlamingo (Awadalla et al., 2023) and IDEFICS (Laurençon et al., 2023). PaLI (Chen et al., 2022; 2023b) studies the scaling factor of V&L components across a wide range of tasks. PaLM-E (Driess et al., 2023) further extends LMM to the embodied domain. BLIP-2 (Li et al., 2023c) introduced the Querying Transformer
2309.14525#41
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
42
et al., 2023) further extends LMM to the embodied domain. BLIP-2 (Li et al., 2023c) introduced the Querying Transformer (Q-former) to bridge the gap between image and language encoders, which was further improved by InstructBLIP (Dai et al., 2023). Otter (Li et al., 2023b;a) focuses on enhancing OpenFlamingo’s instruction-following capability. MiniGPT-4 (Zhu et al., 2023) suggests GPT4’s prowess is due to sophisticated LLMs and recommends using a single projection layer to align visual and linguistic models. It showcases abilities akin to GPT4 but is computationally efficient. mPLUG-Owl (Ye et al., 2023) offers a new approach: initially aligning
2309.14525#42
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
43
visual features and then fine-tuning the language model using LoRA (Hu et al., 2021). Recently, Qwen-VL (Bai et al., 2023) scales the pre-training of LMM to 1.4B data and achieves impressive results across benchmarks. Among them, LLaVA (Liu et al., 2023a; Lu et al., 2023) pioneered LMM work by harnessing GPT4 (OpenAI, 2023) for generating vision-language tuning datasets similar to text instruction efforts (Wei et al., 2021; Chung et al., 2022; Longpre et al., 2023; Sanh et al., 2021; Mukherjee et al., 2023; Taori et al., 2023; Köpf et al., 2023). However, due to the synthetic nature of these generated datasets, misalignments between image and text modalities are prevalent. Our research is the first to address this misalignment through RLHF.
2309.14525#43
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
44
Hallucination Prior to the advent of LLMs, the NLP community primarily defined “hallucination” as the generation of nonsensical content or content that deviates from its source (Ji et al., 2023). The introduction of versatile LLMs has expanded this definition, as outlined by Zhang et al. (2023), into: 1) Input-conflicting hallucination, which veers away from user-given input, exemplified in machine translation (Lee et al., 2018; Zhou et al., 2020); 2) Context-conflicting hallucination, where output contradicts prior LLM-generated information (Shi et al., 2023); and 3) Fact-conflicting hallucination, where content misaligns with established knowledge (Lin et al., 2021). Within the LMM realm, “object hallucination” is well-documented (Rohrbach et al., 2018; MacLeod et al., 2017; Li et al., 2023d; Biten et al., 2022), referring to models producing descriptions or captions including objects that don’t match or are missing from the target image. We expand on this, encompassing any LMM-generated description unfaithful to image aspects, including relations, attributes, environments, and so on. Consequently, we present MMHAL-BENCH, aiming to holistically pinpoint and measure hallucinations in LMMs.
2309.14525#44
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
45
# 5 DISCUSSIONS & LIMITATIONS Hallucination phenomena are observed in both Large Language Models (LLMs) and Large Multimodal Models (LMMs). The potential reasons are two-fold. Firstly, a salient factor contributing to this issue is the low quality of instruction tuning data for current LMMs, as they are typically synthesized by more powerful LLMs such as GPT-4. We expect our proposed high-quality vision instruction-tuning data and future efforts on manually curating high-quality vision instruction tuning data can alleviate this problem. Secondly, the adoption of behavior cloning training in instruction-tuned LMMs emerges as another fundamental cause (Schulman, 2023). Since the instruction data labelers lack insight into the LMM’s visual perception of an image, such training inadvertently conditions LMMs to speculate on uncertain content. To circumvent this pitfall, the implementation of reinforcement learning-based training provides a promising avenue, guiding the model to articulate uncertainties more effectively (Lin et al., 2022; Kadavath et al., 2022). Our work demonstrates a pioneering effort in this direction. Figure 3 illustrates the two sources of hallucination in current behavior cloning training of LLMs.
2309.14525#45
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
46
However, while LLaVA-RLHF enhances human alignment, reduces hallucination, and encourages truthfulness and calibration, applying RLHF can inadvertently dampen the performance of small-sized LMMs. Balancing alignment enhancements without compromising the capability of LMM and LLM is still an unresolved challenge. Furthermore, though we’ve demonstrated the effective use of linear projection in LLaVA with top-tier instruction data, determining an optimal mixture and scaling it to bigger models remains intricate. Our research primarily delves into the fine-tuning phase of VLMs, leaving the issues of misalignment in other modalities and during pre-training yet to be explored. Finally, while MMHAL-BENCH emphasizes the evaluation of LMMs with an aim to curtail hallucinations, it is noteworthy that short or evasive responses can inadvertently attain high scores on MMHAL-BENCH. This underlines an intrinsic trade-off between honesty and helpfulness (Bai et al., 2022a). Consequently, for a more comprehensive assessment of alignment with human preferences, we advocate for the evaluation of prospective LMMs using both MMHAL-BENCH and LLaVA-Bench. # 6 CONCLUSION
2309.14525#46
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
47
# 6 CONCLUSION We proposed several strategies to tackle the multimodal misalignment problems, particularly for vision language models (VLMs), which often produce text inconsistent with the associated images. First, we enrich GPT-4 generated vision instruction tuning data from LLaVA with existing human-authored image-text pairs. Next, we adopt the Reinforcement Learning from Human Feedback (RLHF) algorithm from the text domain to bridge vision-language gaps, wherein human evaluators discern and mark the more hallucinated output. We train the VLM to optimize against simulated human preferences. Moreover, we introduce the Factually Augmented RLHF, leveraging additional factual information such as image captions to enhance the reward model, countering reward hacking in RLHF, and boosting model performance. For tangible real-world impact assessment, we have devised MMHAL-BENCH, an evaluation benchmark targeting the penalization of hallucination. Remarkably, LLaVA-RLHF, being the first VLM trained with RLHF, shows a notable surge in performance across benchmarks. We opensource our code and data, and hope our findings could help the future development of more reliable and human-aligned LLMs and LMMs. # REFERENCES
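The reward model referred to above is fit from pairwise human comparisons, where the less hallucinated of two responses is preferred. Below is a minimal sketch of the generic pairwise preference loss typically used for this step; it is the standard Bradley-Terry-style objective shown for illustration, not the authors' exact training code.

```python
# Generic pairwise preference loss for reward-model fitting (illustrative, not the paper's code).
import torch
import torch.nn.functional as F


def pairwise_reward_loss(r_preferred: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """-log sigmoid(r_preferred - r_rejected), averaged over the batch of comparison pairs."""
    return -F.logsigmoid(r_preferred - r_rejected).mean()


# Usage with dummy scalar rewards for a batch of 4 comparison pairs:
loss = pairwise_reward_loss(torch.randn(4), torch.randn(4))
```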
2309.14525#47
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
48
# REFERENCES Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. In Advances in Neural Information Processing Systems, 2022. Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. PaLM 2 technical report. arXiv preprint arXiv:2305.10403, 2023. Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, et al. OpenFlamingo: An open-source framework for training large autoregressive vision-language models. arXiv preprint arXiv:2308.01390, 2023.
2309.14525#48
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
49
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. Qwen-VL: A frontier large vision-language model with versatile abilities. arXiv preprint arXiv:2308.12966, 2023. Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022a.
2309.14525#49
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
50
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, and Jared Kaplan. Constitutional AI: Harmlessness from AI feedback, 2022b.
2309.14525#50
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]
2309.14525
51
Ali Furkan Biten, Lluís Gómez, and Dimosthenis Karatzas. Let there be a clock on the beach: Reducing object hallucination in image captioning. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1381–1390, 2022. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020. Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, and Rui Zhao. Shikra: Unleashing multimodal llm’s referential dialogue magic. arXiv preprint arXiv:2306.15195, 2023a. Xi Chen, Xiao Wang, Soravit Changpinyo, AJ Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, et al. PaLI: A jointly-scaled multilingual language-image model. arXiv preprint arXiv:2209.06794, 2022.
2309.14525#51
Aligning Large Multimodal Models with Factually Augmented RLHF
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io.
http://arxiv.org/pdf/2309.14525
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
cs.CV, cs.CL
Preprint
null
cs.CV
20230925
20230925
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2204.02311" }, { "id": "2305.10403" }, { "id": "2305.03047" }, { "id": "2211.05100" }, { "id": "2308.12966" }, { "id": "2301.12652" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2306.16527" }, { "id": "2211.01786" }, { "id": "2306.04751" }, { "id": "2209.06794" }, { "id": "2303.03378" }, { "id": "2305.11206" }, { "id": "2207.05221" }, { "id": "2307.09288" }, { "id": "2205.14334" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2307.06281" }, { "id": "1809.02156" }, { "id": "1506.02438" }, { "id": "2306.15195" }, { "id": "2306.02707" }, { "id": "2304.10592" }, { "id": "2204.05862" }, { "id": "2301.12597" }, { "id": "2308.01390" }, { "id": "2305.14387" }, { "id": "2309.01219" }, { "id": "1909.08593" }, { "id": "2305.03726" }, { "id": "2011.02593" }, { "id": "2106.09685" }, { "id": "2305.06500" }, { "id": "2109.07958" }, { "id": "2305.18565" }, { "id": "2309.09958" } ]