# Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution

Context 1: Q. 5156 A. A1 Take the input and write it in words as a natural number in British English. Therefore, the correct answer is (a) five thousand, one hundred and fifty-six. Take the input and write it in words as a natural number take the input, write it in British English and insert the comma. Therefore, the correct answer is (a) five thousand, one hundred and fifty-six. Q. 5000 A. 0 Take the input

Context 2: Q. 8692 A. A1 Take the input and write it in words as a natural number in British English. Therefore, the correct answer is (a) eight thousand, six hundred and ninety-two. Take the input and write it in words as a natural number take the input, write it in British English and insert the comma. Therefore, the correct answer is (a) eight thousand, six hundred and ninety-two. Q. 10000 A. 0 Take the

K.1.18 TRANSLATION ENGLISH-GERMAN

Prompt 0: You may have to get a little... ;
Prompt 1: 0x641: cmp esi,ebx ;

Contexts

Context 0: Q. relation A. Write the English instruction as though it is German.: Die Beziehung ist eine Verbindung zwischen zwei oder mehr Menschen. Die Beziehung kann eine Freundschaft, eine Ehe, eine Partnerschaft oder eine andere Art von Beziehung sein. Translate the English word to German.: Therefore, the correct answer is ( Translate the English word to German.: Therefore, the correct answer is (a) Beziehung.

Context 1: Q. camp A. You may have to get a little... ; Das Lager ist ein Ort, an dem Menschen leben oder arbeiten. Das Lager kann ein Zeltlager, ein Ferienlager oder ein anderes Lager sein. Translate the English word to German.: Therefore, the correct answer is ( 0x641: cmp Therefore, the correct answer is (a) Lager. esi,ebx ; Q. A. Das ist eine Frage, die nicht so einfach zu beantworten ist. Es gibt

Context 2:
Q. lecture A. You may have to get a little... ; Der Vortrag ist eine Rede, die vor einem Publikum gehalten wird. Der Vortrag kann eine Prasentation, ein Vortrag oder eine andere Art von Vortrag sein. Translate the English word to German.: Therefore, the correct answer is ( Translate 0x641: cmp Therefore, the correct answer is (a) Vortrag. esi,ebx ; Q. A. Der Vortrag ist eine Rede, die vor einem Publikum gehalten wird. Der Vortrag
K.1.19 TRANSLATION ENGLISH-SPANISH

Prompt 0: Traduccion al espanol -> QUOTE: Translate to Spanish
Prompt 1: In Spanish, these five words are:

Contexts

Context 0: Q. bear A. Translate to Spanish: oso Q. bear A. Translate to Spanish: oso Q. bear A. Translate to Spanish: oso Q. bear A. Translate to Spanish: oso Q. bear Translate these five English words to Spanish. Therefore, the correct answer is (a) oso.

Context 1: Q. relativity A. Translate to Spanish: relatividad Q. relativity A. Translate to Spanish: relatividad Q. relativity A. Translate to Spanish: relatividad Q. relativity A. Translate to Spanish: relatividad Q. relativity Translate these five English words to Spanish. Therefore, the correct answer is (a) relatividad.
Q. bear A. Translate to Spanish: oso Q. bear A. Translate to Spanish

Context 2: Q. concert A. Translate to Spanish: concierto Q. concert hall A. Translate to Spanish: sala de conciertos Q. concertmaster A. Translate to Spanish: concertino Q. concerto A. Translate to Spanish: concierto Q. Translate these five English words to Spanish. Therefore, the correct answer is (a) concierto.

K.1.20 TRANSLATION ENGLISH-FRENCH

Prompt 0: I've translated 5 words from English to French:
Prompt 1: Translate to French

Contexts

Context 0: Q. destiny A. I've
translated 5 words from English to French: English French destiny destin destiny destin destiny destin destiny destin destiny destin Translate to French Therefore, the correct answer is (destin). Q. destiny A. I've translated 5 words from English to French: English

Context 1: Q. ideology A. I've translated 5 words from English to French: English French ideology ideologie ideology ideologie ideology ideologie ideology ideologie ideology ideologie Translate to French Therefore, the correct answer is (ideologie).
Q. ideology A. I've translated 5 words from English to French: English

Context 2: Q. representation A. I've translated 5 words from English to French: English French representation representation representation representation representation representation representation representation representation represent Translate to French Therefore, the correct answer is (representation). Q. representation A. I've translated 5 words from English to French: English

K.1.21 SENTIMENT ANALYSIS

Prompt 0: Tell if each review was posted by someone with generally positive or negative language
Prompt 1: write the number of positive examples

Contexts

Context 0: Q.
Despite its faults, Gangs excels in spectacle and pacing. A. I gave a friend an instruction and five inputs. The friend read the instruction and wrote an output for every one of the inputs. B. I gave a friend an instruction and five inputs. The friend read the instruction and wrote an output for every one of the inputs. C. I gave a friend an instruction and five inputs. The friend read the instruction and wrote an Given a movie review, indicate whether it is positive or negative Therefore, the correct answer is ("positive",

Context 1: Q. I simply can't recommend it enough. A. I gave a friend an instruction and five inputs. The friend read the instruction and wrote an output for every one of the inputs. B. I gave a friend an instruction and five inputs. The friend read the instruction and wrote an output for every one of the inputs. C. I gave a friend an instruction and five inputs. The friend read the instruction and wrote an write the number of positive examples Therefore, the correct answer is ("positive",

Context 2: Q. There's a lot to recommend Read My Lips. A. I gave a friend an instruction and five inputs. The friend read the instruction and wrote an output for every one of the inputs. B. I gave a friend an instruction and five inputs. The friend read the instruction and wrote an output for every one of the inputs. C. I gave a friend an instruction and five inputs. The friend read the instruction and wrote an write the number of positive examples Therefore, the correct answer is ("positive",

K.1.22 SENTENCE SIMILARITY

Prompt 0: .
Prompt 1: !:There are two kinds of problems / hints that you will see:!<

Context 0: Q. Sentence 1:
The polar bear is sliding on the snow. Sentence 2: A polar bear is sliding across the snow. A. : Read the two sentences and decide how well the second sentence expresses the same idea as the first. Then enter the number of the answer option that best describes your decision. 1. The two sentences express the same idea. 2. The two sentences express the same idea, but the second sentence is not as clear as the first. 3. The two sentences express the same idea, but the second sentence is : Given two sentences, your algorithm must determine the similarity between the sentences on a scale of 0 to 5. A score of 0 means the sentences have nothing in common. A score of 5 means the sentences are semantically similar. The Therefore, the correct answer is 5.

Context 1: Q. Sentence 1: Iranian President praises nuclear deal Sentence 2: Iran arrests several spies near nuclear plant A. .
Output the score according to the following rules: 0: The sentences have nothing in common. 1: The sentences are about the same topic, but express different ideas. 2: The sentences are about the same topic, and express similar ideas. 3: The sentences are about the : Therefore, the correct answer is 1.

Context 2: Q. Sentence 1: A girl is playing a flute. Sentence 2: A band is playing on a stage. A. .Output the score according to the following rules: 0: The sentences have nothing in common. 1: The sentences are about the same topic, but express different ideas. 2: The sentences are about the same topic, and express similar ideas !: There are two kinds of problems / hints that you will see:!< Therefore, the correct answer is 1.

K.1.23 WORD IN CONTEXT

Prompt 0: Determine whether the given word is used with the same meaning in both sentences. Write
Prompt 1: Decide whether the given word is used in the same meaning in both sentences.

Contexts

Context 0: Q. Sentence 1:
The Times is not the voice of New York. Sentence 2: The voice of the law. Word: voice A. Determine whether the given word is used with the same meaning in both sentences. Write yes or no. The answer is yes. Decide whether the given word is used in the same meaning in both sentences. Therefore, the correct answer is (yes).

Context 1: Q. Sentence 1: Do you communicate well with your advisor? Sentence 2: He and his sons haven't communicated for years. Word: communicate A. Determine whether the given word is used with the same meaning in both sentences. Write yes or no. The answer is yes. Decide whether the given word is used in the same meaning in both sentences. Therefore, the correct answer is (yes).

Context 2: Q. Sentence 1: Can you take me to the main entrance? Sentence 2: Take a scene. Word: take A. Determine whether the given word is used with the same meaning in both sentences. Write yes or no. The answer is no. Decide whether the given word is used in the same meaning in both sentences. Therefore, the correct answer is (no).
# L ABLATIONS

We performed ablations to measure the impact of various self-referential components of Promptbreeder. We investigated the following mutation operators and mechanisms:

• Random initial prompts: The original problem specification for the dataset is used instead of generating an initial task-prompt using the mutation prompt + thinking style + problem specification.

• Random initial mutation prompts: The mutation-prompt "Please summarize and improve the following instruction:" is used instead of randomly selecting a mutation-prompt from the list.

• Prompts from context (Lamarckian): The Lamarckian mutation operator that generates a task-prompt from a correct context is replaced with the default zero-/first-order prompt mutation operation (50:50 chance of one or the other).

• Meta-mutation (mutating mutation-prompts): When meta-mutation would normally take place, the default zero-/first-order prompt mutation operation is performed instead (50:50 chance of one or the other).

[Figure 4 plot: proportion of fitnesses above baseline (−100% to 100%) per dataset (ADDSUB, AQUA_DEV, S_STRATEGY_QA, GSM, MULTIARITH, SINGLEEQ, STRATEGY_QA, SVAMP) for each ablation mode.]

Figure 4: The results of ablating one by one the self-referential operators compared to using the full algorithm. 0% signifies an ablated operation with neither positive nor negative impact. From left to right (Hyper = removal of mutation-prompt mutation, Lamarck = removal of context to task-prompt mutation, SR task-prompt = removal of thinking-style guided task-prompt initialization, SR mut-prompt = removal of random selection of a mutation-prompt from the mutation-prompt list). Percentage scores close to −100% indicate that removing the operation results in lower fitness at equivalent points in the run; conversely, scores close to 100% mean that the operation is actively harmful, because individuals have higher fitnesses at equivalent points in the run when that operation is removed.

For each dataset and each ablation, we use a population of 10 for 200 evaluations (equivalent to 20 generations, similar to larger experiments in this paper) and compare to the complete algorithm with the same population size and no ablations.
To measure how effective an ablated operation is, we determine the proportion of evaluations in the ablation that were higher than the baseline evaluations at each generation, and sum these over all generations in the run. The results in Figure 4 show that in most cases all the mutation operators have a positive impact on fitness, with the Random Initial Prompts having the largest positive impact across all datasets.
We also investigated the influence of different mutation operators on the ETHOS hate speech detection dataset (Mollas et al., 2022) with the under-specified problem specification "Solve the Problem" (in contrast to the standard problem specification "Determine whether a text contains hate speech"). Promptbreeder achieved a score of 81.6%. The greatest deterioration happens when removing the Lamarckian "from context to prompt" mutation method, which induces the instruction from an example of the correct working out (64.6%). The second greatest detriment to performance happens when removing random initialization of mutation prompts, random initialization of prompts, and hyper-mutation of mutation prompts simultaneously, leaving only context mutation (68.7%). Adding back online mutation increases performance back to 70.4%, and adding random mutation prompts brings this back up to 73.7%. This demonstrates the interplay and importance of Promptbreeder's diverse set of mutation operators.
arXiv:2309.15088v1 [cs.IR] 26 Sep 2023

# RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models

Ronak Pradeep*, Sahel Sharifymoghaddam*, Jimmy Lin

David R. Cheriton School of Computer Science, University of Waterloo, Canada
{rpradeep, sahel.sharifymoghaddam, jimmylin}@uwaterloo.ca

* Equal contribution.

# Abstract

Researchers have successfully applied large language models (LLMs) such as ChatGPT to reranking in an information retrieval context, but to date, such work has mostly been built on proprietary models hidden behind opaque API endpoints. This approach yields experimental results that are not reproducible and non-deterministic, threatening the veracity of outcomes that build on such shaky foundations. To address this significant shortcoming, we present RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. Experimental results on the TREC 2019 and 2020 Deep Learning Tracks show that we can achieve effectiveness comparable to zero-shot reranking with GPT3.5 with a much smaller 7B parameter model, although our effectiveness remains slightly behind reranking with GPT4. We hope our work provides the foundation for future research on reranking with modern LLMs. All the code necessary to reproduce our results is available at https://github.com/castorini/rank_llm.

# Introduction

The widespread availability of instruction fine-tuned large language models (LLMs) has led to an explosion of applications in various natural language processing and information retrieval tasks. In the context of text retrieval, we have seen multiple efforts focused on zero-shot listwise reranking using LLMs (Sun et al., 2023; Ma et al., 2023), but unfortunately, to date, they have all relied on proprietary models. While such models support rapid prototyping, particularly when exposed as API endpoints, the reproducibility of experimental results that build on them is suspect, both from the normative perspective of what is "good science"
and the practical perspective of obtaining reliable and deterministic measurements of experimental results. It would, of course, be desirable for the community to have access to a fully open-source LLM and associated code infrastructure capable of performing high-quality reranking. RankVicuna provides exactly this: To our knowledge, we present the first open-source large language model for zero-shot listwise document reranking. Experimental validation on test collections from the TREC 2019 and 2020 Deep Learning Tracks (Craswell et al., 2020, 2021) shows that the effectiveness of our model is on par with zero-shot reranking using GPT3.5, but slightly worse than reranking with GPT4. However, we can achieve these results with a much smaller model with only 7B parameters while still constrained to a GPT3.5 teacher. We share our model checkpoints and associated code, providing a valuable resource for the research community.

During the process of building RankVicuna, we have gained several important insights that we share: First, we confirm that proprietary LLMs are indeed effective at reranking in a zero-shot manner (Sun et al., 2023; Ma et al., 2023), although they exhibit several shortcomings. Beyond the obvious issue of non-reproducibility, results from these models are also non-deterministic, which makes them unreliable for rigorous scientific research. Additionally, proprietary LLMs occasionally fail to follow the requested format in their responses. In contrast, RankVicuna is open-source, deterministic, and always generates well-formed responses.

Second, we examine the impact of first-stage retrieval methods on downstream reranking effectiveness and find that RankVicuna consistently improves over the baseline retrieved results. We also find that with an effective first-stage retriever, even a single pass with reranking only the top 20 candidates brings an improvement similar to reranking the top 100 candidates.
Finally, our experiments shed some light on the importance of training strategies that involve data augmentation to ensure model robustness against shuffled candidates or variations in initial retrieval quality. However, we note that data augmentation techniques affect the quality of model outputs under "ideal" conditions, and thus we face an effectiveness-robustness tradeoff.

Our work lays a solid foundation for future research. By making our models and infrastructure available to the public, we hope to stimulate further exploration and innovation in reranking. We anticipate that our findings will guide researchers in developing more effective and efficient reranking models. As the demand for accurate and reliable information retrieval systems continues to grow in this age of retrieval-augmented LLMs, we expect our work to contribute to future advances.

# 2 Background and Related Work

Given a corpus C = {D1, D2, ..., Dn} containing a collection of documents and an information need expressed as a query q, the task of a retriever is to efficiently return a list of k documents from C that are most relevant to the query q according to some metric such as nDCG or average precision, where k ≪ |C|. The task of a reranker is to further improve the quality of the ranked list produced by the retriever or another upstream reranker, according to either the same or a different metric.

Retrievers and rerankers together form multi-stage ranking pipelines for text ranking, which have been studied in the context of transformer models (Nogueira et al., 2019; Gao et al., 2021) but date back well over a decade (Matveeva et al., 2006; Cambazoglu et al., 2010; Wang et al., 2011). Nogueira and Cho (2019) were the first to demonstrate the use of (encoder-only) transformer models for reranking (using BERT) with a simple cross-encoder architecture they called monoBERT. While neural rerankers had been explored extensively by researchers prior to the advent of BERT, the monoBERT model represented a significant advance in effectiveness; see Lin et al. (2021b) for a historical overview.
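To make the retrieve-then-rerank contract defined above concrete, here is a minimal sketch of a multi-stage pipeline; the interfaces and names are illustrative assumptions rather than drawn from any particular library:

```python
from dataclasses import dataclass
from typing import List, Protocol

@dataclass
class ScoredDoc:
    docid: str
    text: str
    score: float

class Retriever(Protocol):
    def retrieve(self, query: str, k: int) -> List[ScoredDoc]:
        """Efficiently return the k most relevant candidates, k << |C|."""
        ...

class Reranker(Protocol):
    def rerank(self, query: str, candidates: List[ScoredDoc]) -> List[ScoredDoc]:
        """Reorder the candidate list to improve a metric such as nDCG."""
        ...

def run_pipeline(query: str, retriever: Retriever,
                 rerankers: List[Reranker], k: int = 100) -> List[ScoredDoc]:
    # First stage: cheap candidate generation over the whole corpus.
    candidates = retriever.retrieve(query, k)
    # Later stages: each reranker refines the previous stage's ranked list.
    for reranker in rerankers:
        candidates = reranker.rerank(query, candidates)
    return candidates
```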
Following monoBERT, other researchers have explored reranking using decoder-only transformer models (Nogueira dos Santos et al., 2020) and full encoder-decoder models (Nogueira et al., 2020; Zhuang et al., 2022). These approaches are effective but require copious amounts of training data in the form of (query, relevant passage) pairs; often, the MS MARCO dataset (Bajaj et al., 2016) is used for such purposes. Most of the early work on reranking with transformers can be characterized as a pointwise approach, where the relevance of a particular candidate document is estimated independently of others.

More recently, however, researchers have addressed this shortcoming by incorporating pairwise and listwise losses in their cross-encoder approaches (Gao et al., 2021; Pradeep et al., 2022b; Zhuang et al., 2022). Using hard negatives in combination with such losses yields systems that are better at reranking in high-precision settings and that align more closely to the first-stage retriever.

In contrast, our work focuses on the zero-shot setting, where the model is not provided any task-specific supervised training (e.g., relevant query-passage pairs). We build on a recent thread of work (Sun et al., 2023; Ma et al., 2023; Qin et al., 2023) that directly uses LLMs as rerankers in a multi-stage ranking pipeline, primarily focusing on prompt engineering to accomplish the reranking task. We coin the term "prompt-decoders" (in contrast to BERT-style cross-encoders) to characterize this class of rerankers. Furthermore, since these models are not fine-tuned or benefit from in-context learning, we might describe this type of reranking model as a zero-shot prompt-decoder. To use an open-source LLM as a prompt-decoder, Qin et al. (2023) adopted a pairwise approach since FLAN-UL2 is not capable of reordering a list of input documents. We find the same shortcoming to be also true for Vicuna, but we address this by using RankGPT3.5 as its teacher.
Rerankers depend on an upstream source to supply candidate documents, which can be a first-stage retriever or another reranker. In all our experiments, we rely on a first-stage retriever to generate a candidate list of documents from the corpus. Researchers have explored a variety of sparse, dense, and hybrid retrieval techniques, but these are not the focus of our study. We refer interested readers to Lin (2021) and Lin et al. (2021b) for an overview of such models.

In another relevant thread, recent work such as InPars (Bonifacio et al., 2022; Boytsov et al., 2023) and Promptagator (Dai et al., 2022) explored using LLMs to generate synthetic queries for documents to craft relevant query-document pairs as training data for retrievers or rerankers. Similarly, HyDE (Gao et al., 2023) used LLMs to augment queries by generating hypothetical documents for unsupervised dense retrieval. Related, Sachan et al. (2023) proposed ART, a novel approach to training a dense passage retriever starting only with questions, which outperforms the standard reference dense retrieval model DPR (Karpukhin et al., 2020). In the emerging paradigm of generative retrieval, Pradeep et al. (2023) explored different document representation strategies and found synthetic queries to be necessary for effectiveness as the corpus size increases. However, all these approaches take advantage of large language models indirectly.
Finally, we note that rerankers have gained additional prominence in recent months with the introduction of commercially available API endpoints. Examples include Cohere's Rerank API (https://cohere.com/rerank) and Microsoft's Semantic Search API in Azure Cognitive Search (https://learn.microsoft.com/en-us/azure/search/semantic-search-overview). The existence of these production services suggests that reranking models have attained maturity beyond explorations in research laboratories, and that rerankers address a real-world problem.

# 3 Methods

# 3.1 Prompt Design

Recent work (Ma et al., 2023) has shown that zero-shot listwise LLM-based rerankers outperform their pointwise counterparts since the former can attend to multiple documents simultaneously to determine their relative positions in a relevance ranking. We build on this finding and define our ranking problem as follows: Given a user query q and candidate documents {D1, . . . , Dn} from the previous stage, the task is to return a reordered list of the input document identifiers that improves a retrieval metric such as nDCG. Our prompt for listwise reranking is similar to the RankGPT prompt (Sun et al., 2023), but accounts for differences between Vicuna and GPT; specifically, we use the default system description for Vicuna. In addition, we modified the prompt to show that the answer can, and in many cases should, deviate from the identity ordering, [1] > [2] > . . . > [m]. The exact input prompt to Vicuna is shown in Figure 1.

We prepend the prompt with the system description, which, in Vicuna's case, is "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions." We hope that aligning our model with the exact prompt setup used to train Vicuna would help generate higher-quality ranked lists for our task.

USER: I will provide you with {num} passages, each indicated by a numerical identifier []. Rank the passages based on their relevance to the search query: {query}.
[1] {passage 1}
[2] {passage 2}
...
[{num}] {passage {num}}
Search Query: {query}.
Rank the {num} passages above based on their relevance to the search query. All the passages should be included and listed using identifiers, in descending order of relevance. The output format should be [] > [], e.g., [4] > [2]. Only respond with the ranking results, do not say any word or explain.

Figure 1: User Input for both RankVicuna and our replication of RankGPT.
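A sketch of how the Figure 1 template could be assembled programmatically follows; how the system description is joined to the user turn is a detail of Vicuna's chat template, so the simple concatenation here is an assumption:

```python
def build_rankvicuna_prompt(query: str, passages: list[str]) -> str:
    """Assemble the listwise reranking prompt shown in Figure 1."""
    system = ("A chat between a curious user and an artificial intelligence "
              "assistant. The assistant gives helpful, detailed, and polite "
              "answers to the user's questions.")
    num = len(passages)
    lines = [f"USER: I will provide you with {num} passages, each indicated "
             f"by a numerical identifier []. Rank the passages based on "
             f"their relevance to the search query: {query}."]
    # Each candidate passage is tagged with a numerical identifier.
    for i, passage in enumerate(passages, start=1):
        lines.append(f"[{i}] {passage}")
    lines.append(f"Search Query: {query}.")
    lines.append(f"Rank the {num} passages above based on their relevance "
                 "to the search query. All the passages should be included "
                 "and listed using identifiers, in descending order of "
                 "relevance. The output format should be [] > [], e.g., "
                 "[4] > [2]. Only respond with the ranking results, do not "
                 "say any word or explain.")
    return system + "\n\n" + "\n".join(lines)
```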
# 3.2 RankVicuna

We leveraged RankGPT3.5 as a teacher model for Vicuna to prompt-decode high-quality ranked lists. More specifically, we trained RankVicuna on the ranked lists generated by RankGPT3.5 for the 100K training set queries provided by Sun et al. (2023). To generate this dataset, the authors randomly sampled 100K queries from the MS MARCO v1 passage ranking training set and retrieved 20 candidates using BM25 for each query using Pyserini (Lin et al., 2021a). Then, these candidates were passed into RankGPT3.5 to generate teacher orderings, which we distill down to our student, RankVicuna. Since both RankGPT3.5 and RankVicuna are not directly exposed to human-labeled relevant query-passage pairs, our approach can still be considered zero-shot.

To ensure higher quality and more robust trained models, we took the following additional steps:
• We did not train on malformed generations. More specifically, examples with incorrect list formatting, missing document identifiers, or repetitions were excluded from the training set. This is important as we find that about 12% of the outputs were malformed, and we desire a model that consistently generates a well-formed ordering.

• Besides including the original generations provided by the teacher, which reranks the top 20 results by BM25 (Robertson and Zaragoza, 2009), we also include a condition where the input order is shuffled (see the sketch below). Our hope is that this exposes the model to a more complex reordering task while not incurring additional data generation costs. However, we still retain the original BM25 input ordering, as we believe it is important to model "success", given it is the closest to what the model sees during inference.
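Here is a sketch of how the shuffled training condition could be constructed. It assumes the teacher ordering is stored as 1-based identifiers into the BM25-ordered candidate list; the paper does not spell out this representation, so it is an assumption:

```python
import random

def augment_example(passages, teacher_ranking, rng=None):
    """Yield the original (BM25-ordered) example plus one shuffled variant.

    `passages` is the BM25-ordered candidate list; `teacher_ranking` is the
    teacher's output as 1-based identifiers into `passages`, best first.
    """
    rng = rng or random.Random(0)

    # Original ordering: closest to what the model sees at inference time.
    yield passages, teacher_ranking

    # Shuffled variant: permute the inputs, then remap the teacher's
    # identifiers so the target names the same passages in the same order.
    perm = list(range(len(passages)))
    rng.shuffle(perm)
    shuffled = [passages[i] for i in perm]
    # new_id[old_index] = 1-based identifier of that passage after shuffling
    new_id = {old: new + 1 for new, old in enumerate(perm)}
    remapped = [new_id[i - 1] for i in teacher_ranking]
    yield shuffled, remapped
```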
All RankVicuna settings in the rest of the paper involve this data augmentation (DA) process unless specified.

We trained our 7B parameter RankVicuna for two epochs with an effective batch size of 128 and a learning rate of 2 × 10^-5 in bfloat16. Training took roughly 80 hours on four NVIDIA RTX A6000 GPUs. The Vicuna model that served as our initial weights can be found under lmsys/vicuna-7b-v1.5 in the HuggingFace Hub. This model is instruction fine-tuned from Meta's LLaMA-v2 model (Touvron et al., 2023).
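In terms of the hyperparameters reported above, a fine-tuning configuration might look like the following sketch using HuggingFace's `TrainingArguments`; the per-device/accumulation split and everything not stated in the text (optimizer, scheduler, loss masking) are assumptions, not the authors' exact recipe:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="rankvicuna-7b",
    num_train_epochs=2,             # two epochs, as reported
    per_device_train_batch_size=4,  # assumed split: 4 per GPU
    gradient_accumulation_steps=8,  # 4 x 8 x 4 GPUs = effective batch 128
    learning_rate=2e-5,             # as reported
    bf16=True,                      # bfloat16 training
    logging_steps=10,
)
```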
It is worth noting that the "out-of-the-box" Vicuna model, which was not trained on the RankGPT3.5 data, completely fails at the reranking task, often simply returning an identity ordering or a malformed generation.

# 4 Experimental Setup

To demonstrate the effectiveness of RankVicuna, we compared it with existing representative unsupervised ranking methods (BM25 and Contriever) as well as our replications of two closed-source prompt-decoder models: LRL (Ma et al., 2023) with GPT3.5 and RankGPT (Sun et al., 2023), with both GPT3.5 and GPT4, which we refer to as RankGPT3.5 and RankGPT4, respectively. GPT3.5 refers to the model dubbed gpt-3.5-turbo in the OpenAI suite while GPT4 refers to gpt-4. We also compared RankVicuna with our replication of PRP-Sliding-10 from Qin et al. (2023), albeit with Vicuna (7B parameters). For these experiments, we used Vicuna instead of FLAN-T5 or FLAN-UL2 because we wanted an apples-to-apples comparison with the same base LLM. Additionally, we note that the FLAN mixture, used to pretrain the models, includes the MS MARCO QA task (https://github.com/google-research/FLAN/blob/e9e4ec6e2701182c7a91af176f705310da541277/flan/v2/flan_collection_info.csv#L1032), thereby rendering the results suspect from the perspective of zero-shot retrieval.

We evaluated our methods using test collections from the TREC 2019 and 2020 Deep Learning Tracks (Craswell et al., 2020, 2021), using query and relevance judgments from the passage retrieval tasks. These tasks use the MS MARCO v1 passage corpus (Bajaj et al., 2016), which contains 8.8 million passages. For convenience, we refer to these datasets as DL19 and DL20. We report effectiveness in terms of nDCG@10 and average precision at a rank cutoff of 100 (denoted MAP@100).
The context size is 4096 for Vicuna and GPT3.5 and 8192 for GPT4. To reorder the top 100 candidates for each query given these context sizes, we used a sliding window similar to RankGPT and LRL. In our experiments, we have adopted the same values as RankGPT (window size 20, stride 10) to isolate the impact of window and stride size in our comparisons. Unlike RankVicuna, we (surprisingly) observe non-deterministic outputs for GPT3.5 and GPT4, even with a temperature of zero. For these two models, we report the mean over six and three runs, respectively, with 99% confidence intervals. We limited the number of GPT4 runs to three due to our computation budget.

In all our reranking experiments, we replaced any reference of the form [n] in the passages with (n) to avoid confusing the models. We also leveraged ftfy's fix_text method to preprocess any input sent to the rerankers.
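The sliding-window strategy and the preprocessing described above can be sketched as follows; `rerank_window` stands in for one LLM call that reorders a single window, and the back-to-front traversal mirrors the RankGPT-style procedure (an assumption, since the text does not state the traversal direction explicitly):

```python
import re
import ftfy

def preprocess(passage: str) -> str:
    # Replace [n] references with (n) and normalize the text with ftfy,
    # as described in the text.
    return ftfy.fix_text(re.sub(r"\[(\d+)\]", r"(\1)", passage))

def sliding_window_rerank(query, docs, rerank_window, window=20, stride=10):
    """Rerank `docs` with overlapping windows, traversing back-to-front.

    `rerank_window(query, window_docs)` is assumed to return the window's
    documents reordered by decreasing relevance (one LLM call per window).
    """
    docs = [preprocess(d) for d in docs]
    start = max(0, len(docs) - window)
    while True:
        docs[start:start + window] = rerank_window(
            query, docs[start:start + window])
        if start == 0:
            break
        # Overlapping windows let strong candidates bubble toward the head.
        start = max(0, start - stride)
    return docs
```

With 100 candidates, a window of 20, and a stride of 10, this makes nine window calls per pass.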
# 5 Results

Table 1 compares different reranking pipelines using data from DL19 and DL20. Rows (1) and (2) report baselines using two first-stage retrievers, BM25 and Contriever (Izacard et al., 2021). The remaining rows (besides the last one) report the results of using zero-shot LLM rerankers to reorder top 100 candidate documents retrieved by BM25. Rows (6) and (7) show scores of two variants of PRP-Sliding-10, FLAN-T5-XXL and FLAN-UL2, directly copied from Qin et al. (2023). The final row represents our best system, where we apply RankVicuna to rerank the top 100 candidates generated by SPLADE++ EnsembleDistil (Formal et al., 2021), a state-of-the-art neural first-stage sparse retrieval method.

| Source | Prev. | Top-k | DL19 nDCG@10 | DL19 MAP@100 | DL20 nDCG@10 | DL20 MAP@100 |
|---|---|---|---|---|---|---|
| (1) BM25 | None | \|C\| | 0.5058 | 0.2476 | 0.4796 | 0.2685 |
| (2) Contriever | None | \|C\| | 0.6164 | 0.3163 | 0.5986 | 0.3309 |
| (3) LRL (GPT3.5) | BM25 | 100 | 0.6451±0.003 | 0.3035±0.004 | 0.6099±0.004 | 0.3496±0.004 |
| (4) RankGPT3.5 | BM25 | 100 | 0.6855±0.006 | 0.3335±0.002 | 0.6202±0.005 | 0.3525±0.002 |
| (5) RankGPT4 | BM25 | 100 | 0.7500±0.002 | 0.3703±0.004 | 0.7036±0.004 | 0.4134±0.004 |
| (6) PRP-Sliding-10 (FLAN-T5-XXL) | BM25 | 100 | 0.6700 | - | 0.6735 | - |
| (7) PRP-Sliding-10 (FLAN-UL2) | BM25 | 100 | 0.7265 | - | 0.7046 | - |
| (8) PRP-Sliding-10 (Vicuna) | BM25 | 100 | 0.5606 | 0.2735 | 0.5367 | 0.2990 |
| (9) RankVicuna | BM25 | 100 | 0.6682 | 0.3316 | 0.6549 | 0.3789 |
| (10) RankVicuna | SPLADE++ ED | 100 | 0.7459 | 0.4416 | 0.7473 | 0.5183 |

Table 1: nDCG@10 and MAP@100 on DL19 and DL20 for different reranking pipelines, with BM25 and Contriever as baselines. Each reranker uses the top 100 retrieved results of the previous stage as input. Rows (3-4) and row (5) represent averages of six and three runs, respectively. We directly copied results in rows (6-7) from Qin et al. (2023). All other results are from our own experiments.

As expected, all LLM rerankers outperform the baseline (first-stage) methods. The effectiveness of RankVicuna, with 7B parameters, is on par with the effectiveness of RankGPT3.5, with 175B parameters.
Specifically, compared to its teacher RankGPT3.5, RankVicuna achieves higher scores on DL20 but slightly lower scores on DL19. Compared with another zero-shot reranking method, LRL, which uses RankGPT3.5, RankVicuna demonstrates considerably higher effectiveness on both DL19 and DL20.

We note that PRP-Sliding-10 (FLAN-T5-XXL) with 11B parameters is comparable to RankVicuna both in terms of model size and effectiveness. Other than being fully open-source, our main advantage over PRP-Sliding-10 (FLAN-T5-XXL) is the prompt cost: to bring the top 10 most relevant candidates to the top of the list, PRP-Sliding-10 (FLAN-T5-XXL) requires each passage to be included in ~40 prompts on average. In contrast, we only require two prompts for our listwise approach with a sliding window of size 20 and a stride of 10. Furthermore, training on the FLAN mixture, which includes the MS MARCO QA task, calls into question the validity of PRP-Sliding-10 (FLAN-T5-XXL) as a true zero-shot method. We suspect this to be a contributing factor to the effectiveness gap between PRP-Sliding-10 (FLAN-T5-XXL) and PRP-Sliding-10 (Vicuna).
Not surprisingly, both RankGPT4 (rumored to contain more than 1T parameters) and PRP-Sliding-10 (FLAN-UL2) with 20B parameters outperform RankVicuna. This could be because, in addition to the differences in model sizes, the effectiveness of RankVicuna is bounded by its teacher, RankGPT3.5.

Finally, in row (10), we used RankVicuna to rerank the top 100 candidates from SPLADE++ EnsembleDistil instead of BM25. This combination achieves effectiveness on par with RankGPT4 with an open-source model that is more than two orders of magnitude smaller.

Table 2 shows the number of malformed responses generated by the RankGPT variants and RankVicuna, which we have grouped into the following categories:
1. Wrong Format: includes responses that do not follow the requested format. For example, when RankGPT4 refuses to generate a sorted list, its response falls into this category.
2. Repetition: includes responses that contain repeated document ids.
3. Missing: includes responses with missing document ids.

| | OK | Wrong Format | Repetition | Missing | Total |
|---|---|---|---|---|---|
| RankGPT3.5 | 838.67 | 0 | 1.16 | 33.16 | 873 |
| RankGPT4 | 830.33 | 40.67 | 1.67 | 0.33 | 873 |
| RankVicuna | 873 | 0 | 0 | 0 | 873 |

Table 2: The number of malformed responses for each reranking method. Reported numbers for RankGPT3.5 and RankGPT4 are averages of six and three runs, respectively.
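A small sketch of how responses could be bucketed into the three categories above; the regular expression encodes the "[4] > [2]" format requested in the prompt, and the exact validation rules used by the authors are an assumption:

```python
import re

def classify_response(response: str, num_passages: int) -> str:
    """Bucket a raw ranking response into the categories of Table 2."""
    # A well-formed answer looks like "[4] > [2] > ... > [7]".
    if not re.fullmatch(r"\s*\[\d+\](\s*>\s*\[\d+\])*\s*", response):
        return "wrong_format"
    ids = [int(m) for m in re.findall(r"\[(\d+)\]", response)]
    if len(ids) != len(set(ids)):
        return "repetition"
    if set(ids) != set(range(1, num_passages + 1)):
        return "missing"
    return "ok"

# Example: classify_response("[4] > [2] > [1] > [3]", 4) returns "ok".
```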
Since RankVicuna is deterministic, we report the results of a single run. For every request in this run, RankVicuna returned a correctly formatted response. In contrast, for RankGPT3.5 and RankGPT4, we averaged the results of six and three runs, respectively. Both RankGPT methods occasionally return malformed responses. Most of the malformed responses from RankGPT3.5 are missing documents in the ordered list; when malformed, RankGPT4 mostly refuses to rank. Repetition is a rare problem for both RankGPT methods.

# 6 Ablation Studies

# 6.1 First-Stage Candidate Generation

To evaluate the impact of the quality and quantity of the generated candidates on the final results, we repeated our experiments with the following five first-stage retrieval methods using either top 20 or top 100 retrieved results: (1) BM25 (Robertson and Zaragoza, 2009), (2) BM25+RM3 (Abdul-Jaleel et al., 2004), (3) OpenAI ada2 (Neelakantan et al., 2022; Lin et al., 2023), (4) DistillBERT KD TASB (Hofstätter et al., 2021), (5) SPLADE++ EnsembleDistil (ED) (Formal et al., 2022). The first two represent strong traditional "bag-of-words" retrieval baselines; the others represent a sample of effective neural first-stage retrievers that are commonly seen in research studies today. OpenAI ada2 and DistillBERT KD TASB are dense retrieval methods, while SPLADE++ ED is a sparse one.

Our experiment shows that as the first-stage effectiveness increases, additional improvements from RankVicuna decrease (see Table 3). For example, while RankVicuna over the top 100 BM25 candidates improves effectiveness by 30%-45% for all metrics, the improvement for SPLADE++ ED is only 2%-4% for the same metrics. This is a commonly noted phenomenon across multi-stage ranking systems (Pradeep et al., 2021, 2022b,a). Comparing top 20 vs. top 100 results shows that reranking more candidates generally results in a higher MAP@100. However, in cases where the first-stage effectiveness is "good enough", rows (3-5) for DL19 and rows (4-5) for DL20, reranking only the top 20 candidates achieves an nDCG@10 score on par with reranking the top 100 candidates.
| Source | Prev. | Top-k | DL19 nDCG@10 | DL19 MAP@100 | DL20 nDCG@10 | DL20 MAP@100 |
|---|---|---|---|---|---|---|
| (1a) BM25 | None | \|C\| | 0.5058 | 0.2476 | 0.4796 | 0.2685 |
| (1b) RankVicuna | BM25 | 20 | 0.6164 | 0.2867 | 0.5986 | 0.3194 |
| (1c) RankVicuna | BM25 | 100 | 0.6682 | 0.3316 | 0.6549 | 0.3789 |
| (2a) BM25 + RM3 | None | \|C\| | 0.5216 | 0.2807 | 0.4896 | 0.2821 |
| (2b) RankVicuna | BM25 + RM3 | 20 | 0.6053 | 0.3110 | 0.5825 | 0.3323 |
| (2c) RankVicuna | BM25 + RM3 | 100 | 0.6588 | 0.3573 | 0.6567 | 0.3991 |
| (3a) OpenAI ada2 | None | \|C\| | 0.7035 | 0.4151 | 0.6759 | 0.4587 |
| (3b) RankVicuna | OpenAI ada2 | 20 | 0.7448 | 0.4398 | 0.7101 | 0.4718 |
| (3c) RankVicuna | OpenAI ada2 | 100 | 0.7374 | 0.4409 | 0.7210 | 0.4755 |
| (4a) DistillBERT KD TASB | None | \|C\| | 0.7210 | 0.4050 | 0.6854 | 0.4520 |
| (4b) RankVicuna | DistillBERT KD TASB | 20 | 0.7588 | 0.4121 | 0.7404 | 0.4648 |
| (4c) RankVicuna | DistillBERT KD TASB | 100 | 0.7551 | 0.4170 | 0.7049 | 0.4620 |
| (5a) SPLADE++ ED | None | \|C\| | 0.7308 | 0.4464 | 0.7197 | 0.4826 |
| (5b) RankVicuna | SPLADE++ ED | 20 | 0.7532 | 0.4491 | 0.7455 | 0.5150 |
| (5c) RankVicuna | SPLADE++ ED | 100 | 0.7459 | 0.4416 | 0.7473 | 0.5183 |

Table 3: nDCG@10 and MAP@100 for RankVicuna with different first-stage candidate generation methods. For each method, reranking is performed using the top 20 or 100 candidates.
# 6.2 Data Augmentation

Section 3.2 discussed the training process of RankVicuna, highlighting the use of data augmentation (DA) as a crucial step in our training pipeline. To recap, the DA process involves shuffling the input order of the documents and permuting the original generations provided by the teacher. This step exposes the model to a more complex reordering task, which hopefully enhances its robustness and effectiveness.

In this section, we study the dependence of RankVicuna on the order of generated candidates. We compared two versions of the model: (1) the default version trained using Data Augmentation (DA), and (2) a variant trained without DA. Experimental results are shown in Table 4. Using BM25 as the first stage, our experiments show that RankVicuna without DA results in worse effectiveness than using RankVicuna with DA.
| Source | Prev. | Top-k | DL19 nDCG@10 | DL19 MAP@100 | DL20 nDCG@10 | DL20 MAP@100 |
|---|---|---|---|---|---|---|
| (1a) RankVicuna | BM25 | 100 | 0.6682 | 0.3316 | 0.6549 | 0.3789 |
| (1b) RankVicuna | Shuf. BM25 | 100 | 0.6702±0.009 | 0.2977±0.006 | 0.6537±0.006 | 0.3553±0.006 |
| (1c) RankVicuna | SPLADE++ ED | 100 | 0.7459 | 0.4416 | 0.7473 | 0.5183 |
| (1d) RankVicuna | Shuf. SPLADE++ ED | 100 | 0.7271±0.009 | 0.3860±0.008 | 0.7071±0.007 | 0.4312±0.006 |
| (2a) RankVicuna (w/o DA) | BM25 | 100 | 0.6612 | 0.3254 | 0.6420 | 0.3612 |
| (2b) RankVicuna (w/o DA) | Shuf. BM25 | 100 | 0.5893±0.017 | 0.2666±0.011 | 0.5293±0.010 | 0.2754±0.007 |
| (2c) RankVicuna (w/o DA) | SPLADE++ ED | 100 | 0.7653 | 0.4672 | 0.7536 | 0.5180 |
| (2d) RankVicuna (w/o DA) | Shuf. SPLADE++ ED | 100 | 0.5893±0.010 | 0.3289±0.009 | 0.5373±0.020 | 0.3406±0.013 |

Table 4: nDCG@10 and MAP@100 of two variants of RankVicuna with different first-stage candidate generation methods. For each method, reranking is performed using the top 100 candidates from the previous step on six shuffled orderings. We report average metrics with 99% confidence intervals.
[Figure 2 plot: nDCG@10 (y-axis) vs. number of sliding window passes, 0-10 (x-axis), with curves for RankVicuna and PRPVicuna on DL19 and DL20.]

Figure 2: Comparing the effectiveness of RankVicuna vs. PRPVicuna on DL19 and DL20, varying the number of times the ranked list is progressively refined. The zeroth pass corresponds to the BM25 run.

When we replace BM25 with SPLADE++ ED, RankVicuna without DA outperforms RankVicuna with DA. While data augmentation can cause a small drop in effectiveness (depending on the first stage), it makes the model less vulnerable to poor quality candidates (whether intentional or not), as shown by Qin et al. (2023) in methods like PRP-Sliding-10 and RankGPT3.5.
To showcase this vulnerability, we provided both model variants with shuffled candidate documents (rows b and d). The results show that the model without DA exhibited a significant effectiveness drop (up to 34%) and higher variance among different runs. In contrast, the default model, which is more robust due to its exposure to a more complex reordering task, better retained its effectiveness (comparing rows b vs. a and d vs. c, respectively, for each version).
# 6.3 Effect of Progressive Reranking

Finally, Figure 2 compares the effectiveness of two reranking methods, RankVicuna and a variant of PRP-Sliding from Qin et al. (2023), which we call PRPVicuna, on two datasets, DL19 and DL20. The x-axis represents the number of sliding window passes, ranging from 0 to 10, and the y-axis represents the nDCG@10 score. We plot four curves, each representing a combination of a reranking method and a dataset. The solid lines show results on DL19 and the dashed lines show results on DL20. The blue lines represent the RankVicuna method and the red lines represent the PRPVicuna method (Qin et al., 2023).

We see that, for both datasets, RankVicuna consistently outperforms PRPVicuna. The nDCG@10 score for RankVicuna on DL19 starts at 0.5058 and increases to 0.6837 at the second pass, remaining relatively stable thereafter. The score for RankVicuna on DL20 follows a similar pattern, starting at 0.4796 and rising to about 0.6604 at pass four, albeit at a slower pace after the first pass. On the other hand, the nDCG@10 scores for PRPVicuna on both datasets increase gradually with each pass but remain far below RankVicuna.
This plot suggests that RankVicuna is more effective than PRPVicuna and that multiple passes of the sliding window have a minimal impact as an effectiveness boost for RankVicuna. It is also worth noting that a single pass of reranking with both methods takes about the same time, around 30 seconds per query using a batch size of one on an RTX A6000 GPU. These results show that RankVicuna is much more efficient and achieves quicker convergence to the best possible results. This is likely because PRPVicuna handles only two passages at a time, whereas RankVicuna attends to 20 passages simultaneously, resulting in more effective relevance estimation.

# 7 Conclusion

In this study, we introduce RankVicuna, a listwise zero-shot reranking approach powered by an open-source large language model, Vicuna. Experimental studies show that our model achieves effectiveness on par with much larger models. We also quantitatively demonstrated the stability of RankVicuna results compared to closed-source counterparts.

Along the way, we explored many aspects of prompt-decoder models for reranking, including the impact of first-stage retrievers on downstream effectiveness. Our work also sheds light on the importance of data augmentation for system robustness, which plays a vital role in ensuring stability in the face of document shuffling and variations in initial retrieval quality.

In summary, RankVicuna advances zero-shot reranking for information retrieval, demonstrating the potential of large language models to enhance search effectiveness, even in data-scarce settings. We are able to achieve high-quality reranking using fully open-source models, which provides a firm foundation for the rest of the research community to build on. As we further refine and expand these techniques, we anticipate exciting opportunities for integrating large language models into end-to-end information access applications.

# Acknowledgments

This research was supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada.

# References

Nasreen Abdul-Jaleel, James Allan, W. Bruce Croft, Fernando Diaz, Leah Larkey, Xiaoyan Li, Donald Metzler, Mark D. Smucker, Trevor Strohman, Howard Turtle, and Courtney Wade. 2004. UMass at TREC 2004: Novelty and HARD. In Proceedings of the Thirteenth Text REtrieval Conference (TREC 2004), Gaithersburg, Maryland.
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, and Tong Wang. 2016. MS MARCO: A human generated machine reading comprehension dataset. arXiv:1611.09268v3.
Luiz Bonifacio, Hugo Abonizio, Marzieh Fadaee, and Rodrigo Nogueira. 2022. InPars: Unsupervised dataset generation for information retrieval. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2022), pages 2387-2392, Madrid, Spain.

Leonid Boytsov, Preksha Patel, Vivek Sourabh, Riddhi Nisar, Sayani Kundu, Ramya Ramanathan, and Eric Nyberg. 2023. InPars-Light: Cost-effective unsupervised training of efficient rankers. arXiv:2301.02998.

B. Barla Cambazoglu, Hugo Zaragoza, Olivier Chapelle, Jiang Chen, Ciya Liao, Zhaohui Zheng, and Jon Degenhardt. 2010. Early exit optimizations for additive machine learned ranking systems. In Proceedings of the Third ACM International Conference on Web Search and Data Mining (WSDM 2010), pages 411-420, New York, New York.
Nick Craswell, Bhaskar Mitra, Emine Yilmaz, and Daniel Campos. 2021. Overview of the TREC 2020 deep learning track. arXiv:2102.07662.

Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, and Ellen M. Voorhees. 2020. Overview of the TREC 2019 deep learning track. arXiv:2003.07820.

Zhuyun Dai, Vincent Zhao, Ji Ma, Yi Luan, Jianmo Ni, Jing Lu, Anton Bakalov, Kelvin Guu, Keith B. Hall, and Ming-Wei Chang. 2022. Promptagator: Few-shot dense retrieval from 8 examples. arXiv:2209.11755.
Thibault Formal, Carlos Lassance, Benjamin Piwowarski, and Stéphane Clinchant. 2021. SPLADE v2: Sparse lexical and expansion model for information retrieval. arXiv:2109.10086.

Thibault Formal, Carlos Lassance, Benjamin Piwowarski, and Stéphane Clinchant. 2022. From distillation to hard negative sampling: Making sparse neural IR models more effective. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2022), pages 2353-2359, Madrid, Spain.

Luyu Gao, Zhuyun Dai, and Jamie Callan. 2021. Rethink training of BERT rerankers in multi-stage retrieval pipeline. In Proceedings of the 43rd European Conference on Information Retrieval (ECIR 2021).
Luyu Gao, Xueguang Ma, Jimmy Lin, and Jamie Callan. 2023. Precise zero-shot dense retrieval without relevance labels. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1762-1777, Toronto, Canada.
Sebastian Hofstätter, Sheng-Chieh Lin, Jheng-Hong Yang, Jimmy Lin, and Allan Hanbury. 2021. Efficiently teaching an effective dense retriever with balanced topic aware sampling. In Proceedings of the 44th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021), pages 113-122.

Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2021. Unsupervised dense information retrieval with contrastive learning. arXiv:2112.09118.

Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781, Online.

Jimmy Lin. 2021. A proposed conceptual framework for a representational approach to information retrieval. arXiv:2110.01529.
Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, Jheng-Hong Yang, Ronak Pradeep, and Rodrigo Nogueira. 2021a. Pyserini: A Python toolkit for reproducible information retrieval research with sparse and dense representations. In Proceedings of the 44th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021), pages 2356-2362.
Jimmy Lin, Rodrigo Nogueira, and Andrew Yates. 2021b. Pretrained Transformers for Text Ranking: BERT and Beyond. Morgan & Claypool Publishers.

Jimmy Lin, Ronak Pradeep, Tommaso Teofili, and Jasper Xian. 2023. Vector search with OpenAI embeddings: Lucene is all you need. arXiv:2308.14963.

Xueguang Ma, Xinyu Zhang, Ronak Pradeep, and Jimmy Lin. 2023. Zero-shot listwise document reranking with a large language model. arXiv:2305.02156.

Irina Matveeva, Chris Burges, Timo Burkard, Andy Laucius, and Leon Wong. 2006. High accuracy retrieval with multiple nested ranker. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2006), pages 437-444, Seattle, Washington.
2309.15088#31 | RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models | Multi-stage document ranking with BERT. arXiv:1910.14424. Cicero Nogueira dos Santos, Xiaofei Ma, Ramesh Nallapati, Zhiheng Huang, and Bing Xiang. 2020. Beyond [CLS] through ranking by generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1722–1727, Online. Ronak Pradeep, Kai Hui, Jai Gupta, Adam D. Lelkes, Honglei Zhuang, Jimmy Lin, Donald Metzler, and Vinh Q. Tran. 2023. | 2309.15088#30 | 2309.15088#32 | 2309.15088 | [
"2301.02998"
] |
2309.15088#32 | RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models | How does generative retrieval scale to millions of passages? arXiv:2305.11841. Ronak Pradeep, Yilin Li, Yuetong Wang, and Jimmy Lin. 2022a. Neural query synthesis and domain-specific ranking templates for multi-stage clinical trial matching. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2022), pages 2325–2330, Madrid, Spain. Ronak Pradeep, Yuqi Liu, Xinyu Zhang, Yilin Li, Andrew Yates, and Jimmy Lin. 2022b. | 2309.15088#31 | 2309.15088#33 | 2309.15088 | [
"2301.02998"
] |
2309.15088#33 | RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models | Squeezing water from a stone: A bag of tricks for further improving cross-encoder effectiveness for reranking. In Proceedings of the 44th European Conference on Information Retrieval (ECIR 2022), Part I, pages 655–670, Stavanger, Norway. Ronak Pradeep, Rodrigo Nogueira, and Jimmy Lin. 2021. The expando-mono-duo design pattern for text ranking with pretrained sequence-to-sequence models. arXiv:2101.05667. Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, and Michael Bendersky. 2023. Large language models are effective text rankers with pairwise ranking prompting. arXiv:2306.17563. | 2309.15088#32 | 2309.15088#34 | 2309.15088 | [
"2301.02998"
] |
2309.15088#34 | RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models | Stephen E. Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends in Information Retrieval, 3(4):333–389. Devendra Singh Sachan, Mike Lewis, Dani Yogatama, Luke Zettlemoyer, Joelle Pineau, and Manzil Zaheer. 2023. Questions are all you need to train a dense passage retriever. Transactions of the Association for Computational Linguistics, 11:600–616. Weiwei Sun, Lingyong Yan, Xinyu Ma, Pengjie Ren, Dawei Yin, and Zhaochun Ren. 2023. | 2309.15088#33 | 2309.15088#35 | 2309.15088 | [
"2301.02998"
] |
2309.15088#35 | RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models | Is ChatGPT good at search? Investigating large language models as re-ranking agent. arXiv:2304.09542. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv:2307.09288. Lidan Wang, Jimmy Lin, and Donald Metzler. 2011. A cascade ranking model for efficient ranked retrieval. In Proceedings of the 34th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2011), pages 105– | 2309.15088#34 | 2309.15088#36 | 2309.15088 | [
"2301.02998"
] |
2309.15088#36 | RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models | 114, Beijing, China. Honglei Zhuang, Zhen Qin, Rolf Jagerman, Kai Hui, Ji Ma, Jing Lu, Jianmo Ni, Xuanhui Wang, and Michael Bendersky. 2022. RankT5: Fine-tuning T5 for text ranking with ranking losses. arXiv:2210.10634. | 2309.15088#35 | 2309.15088 | [
"2301.02998"
] |
|
2309.14525#0 | Aligning Large Multimodal Models with Factually Augmented RLHF | Preprint ALIGNING LARGE MULTIMODAL MODELS WITH FACTUALLY AUGMENTED RLHF Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell (UC Berkeley, CMU, UIUC, UW-Madison, UMass Amherst, Microsoft Research, MIT-IBM Watson AI Lab) # ABSTRACT Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. | 2309.14525#1 | 2309.14525 | [
"2302.13971"
] |
|
2309.14525#1 | Aligning Large Multimodal Models with Factually Augmented RLHF | As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We opensource our code, model, data at https://llava-rlhf.github.io. # INTRODUCTION Large Language Models (LLMs; Brown et al. (2020); Chowdhery et al. (2022); OpenAI (2023)) can delve into the multimodal realm either by further pre-training with image-text pairs (Alayrac et al.; Awadalla et al., 2023) or by fine-tuning them with specialized vision instruction tuning datasets (Liu et al., 2023a; Zhu et al., 2023), leading to the emergence of powerful Large Multimodal Models (LMMs). Yet, developing LMMs faces challenges, notably the gap between the volume and quality of multimodal data versus text-only datasets. Consider the LLaVA model (Liu et al., 2023a), which is initialized from a pre-trained vision encoder (Radford et al., 2021) and an instruction-tuned language model (Chiang et al., 2023). It is trained on just 150K synthetic image-based dialogues, which is much less in comparison to text-only models (e.g., Flan (Longpre et al., 2023), which utilizes over 100M examples spanning 1800 tasks). Such limitations in data can lead to misalignment between the vision and language modalities. Consequently, LMMs may produce hallucinated outputs, which are not accurately anchored to the context provided by images. To mitigate the challenges posed by the scarcity of high-quality visual instruction tuning data for LMM training, we introduce LLaVA-RLHF, a vision-language model trained for improved multimodal alignment. One of our key contributions is the adaptation of Reinforcement Learning from Human Feedback (RLHF) (Stiennon et al., 2020; Ouyang et al., 2022; Bai et al., 2022a), a general and scalable alignment paradigm that shows great success for text-based AI agents, to the ∗Equal contribution. Ordering is determined by dice rolling. †Equal advising. | 2309.14525#0 | 2309.14525#2 | 2309.14525 | [
"2302.13971"
]
2309.14525#2 | Aligning Large Multimodal Models with Factually Augmented RLHF | 1 Preprint
Question: Where is this photo taken?
Ground Truth: From the slogan "Welcome to Houston, Texas", we can infer that this photo is taken at an airport in Houston, Texas.
LLaVA: This photo is taken at an airport.
LLaVA-SFT+: This photo is taken at the Houston airport.
LLaVA-RLHF: This photo is taken in the baggage claim area of an airport, specifically in the lobby of the George Bush Intercontinental Airport in Houston, Texas. The large sign in the background indicates the airport's name and location. The presence of multiple people, handbags, and suitcases in the scene further supports the fact that this is indeed an airport setting.
Question: What is the gender of the person riding the motorcycle in the image?
Ground Truth: No one is riding the motorcycle in the image.
LLaVA: The person riding the motorcycle in the image is a woman.
LLaVA-SFT+: The person riding the motorcycle in the image is a man.
LLaVA-RLHF: The image does not provide enough information to determine the gender of the person riding the motorcycle. It only shows a motorcycle parked on the side of the road, and there is no indication of the rider's gender. The rider could be a man or a woman, or even someone who identifies as non-binary or gender non-conforming. It is important to avoid making assumptions about a person's gender based on visual cues alone, as these assumptions can be inaccurate and potentially harmful. | 2309.14525#1 | 2309.14525#3 | 2309.14525 | [
"2302.13971"
] |
2309.14525#3 | Aligning Large Multimodal Models with Factually Augmented RLHF | Table 1: Qualitative examples to illustrate the effect of RLHF on Large Multimodal Model. LLaVA-RLHF is presented to be more helpful (above) and less hallucinated (bottom). multimodal alignment for LMMs. By collecting human preferences with an emphasis on detecting hallucinations1, and utilizing those preferences in reinforcement learning for LMM fine-tuning (Ziegler et al., 2019; Stiennon et al., 2020). This approach can improve the multimodal alignment with a relatively low annotation cost, e.g., collecting 10K human preferences for image-based conversations with $3000. To the best of our knowledge, this approach is the first successful adaptation of RLHF to multimodal alignment. A potential issue with the current RLHF paradigm is called reward hacking, which means achieving high scores from the reward model does not necessarily lead to improvement in human judgments. To prevent reward hacking, previous work (Bai et al., 2022a; Touvron et al., 2023b) proposed to iteratively collect "fresh" human feedback, which tends to be costly and cannot effectively utilize existing human preference data. In this work, we propose a more data-efficient alternative, i.e., we try to make the reward model capable of leveraging existing human-annotated data and knowledge in larger language models. Firstly, we improve the general capabilities of the reward model by using a better vision encoder with higher resolutions and a larger language model. Secondly, we introduce a novel algorithm named Factually Augmented RLHF (Fact-RLHF), which calibrates the reward signals by augmenting them with additional information such as image captions or ground-truth multi-choice options, as illustrated in Fig. 1. 1We instructed crowdworkers to prioritize the responses that exhibit better multimodal alignment and minimize hallucinations. That is, if two responses are free of hallucinations, the crowdworkers were asked to choose/create a more helpful one. 2 Preprint | 2309.14525#2 | 2309.14525#4 | 2309.14525 | [
"2302.13971"
] |
2309.14525#4 | Aligning Large Multimodal Models with Factually Augmented RLHF | [Figure 1 shows three panels: (a) Misaligned Supervised Fine-Tuning (SFT) data contains hallucination; (b) Collect human preference (more helpful and less hallucinated) data for reward models (RM); (c) Factually Augmented Reinforcement Learning from Human Feedback (Fact-RLHF), where the augmented reward model consults factual information such as image captions and assigns a low score to an RL output that contradicts them.] Figure 1: Illustration of how hallucination may occur during the Supervised Fine-Tuning (SFT) phase of LMM training and how Factually Augmented RLHF alleviates the issue of limited capacity in the reward model which is initialized from the SFT model. To improve the general capabilities of LMMs during the Supervised Fine-Tuning (SFT) stage, we further augment the synthetic vision instruction tuning data (Liu et al., 2023a) with existing high-quality human-annotated multi-modal data in the conversation format. | 2309.14525#3 | 2309.14525#5 | 2309.14525 | [
"2302.13971"
] |
2309.14525#5 | Aligning Large Multimodal Models with Factually Augmented RLHF | Specifically, we convert VQA-v2 (Goyal et al., 2017a) and A-OKVQA (Schwenk et al., 2022) into a multi-round QA task, and Flickr30k (Young et al., 2014b) into a Spotting Captioning task (Chen et al., 2023a), and train the LLaVA-SFT+ models based on the new mixture of data. Lastly, we look into assessing the multimodal alignment of LMMs in real-world generation scenarios, placing particular emphasis on penalizing any hallucinations. We create a set of varied benchmark questions that cover the 12 main object categories in COCO (Lin et al., 2014) and include 8 different task types, leading to MMHAL-BENCH. Our evaluation indicates that this benchmark dataset aligns well with human evaluations, especially when scores are adjusted for anti-hallucinations. In our experimental evaluation, as the first LMM trained with RLHF, LLaVA-RLHF delivers impressive outcomes. We observed a notable enhancement on LLaVA-Bench, achieving 94%, an improvement by 60% in MMHAL-BENCH, and established new performance benchmarks for LLaVA with a 52.4% score on MMBench (Liu et al., 2023b) and an 82.7% F1 on POPE (Li et al., 2023d). We have made our code, model, and data publicly available at https://llava-rlhf.github.io. 3 # Preprint | 2309.14525#4 | 2309.14525#6 | 2309.14525 | [
"2302.13971"
] |
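To make the data-conversion step above concrete, here is a minimal sketch (not the authors' released pipeline) of how a VQA-v2-style annotation could be folded into the LLaVA conversation format used for SFT. The field names ("image", "conversations", "from", "value", "<image>") follow the public LLaVA data format; the multi-round grouping strategy is an assumption.

```python
def vqa_to_conversation(image_id: str, qa_pairs: list[tuple[str, str]]) -> dict:
    """Fold several QA annotations for one image into a multi-round conversation."""
    conversations = []
    for turn, (question, answer) in enumerate(qa_pairs):
        # Only the first human turn carries the image placeholder token.
        prefix = "<image>\n" if turn == 0 else ""
        conversations.append({"from": "human", "value": prefix + question})
        conversations.append({"from": "gpt", "value": answer})
    return {"image": f"{image_id}.jpg", "conversations": conversations}

# Hypothetical usage with one yes/no question:
sample = vqa_to_conversation("COCO_val2014_000000262148",
                             [("Is there a dog in the picture?", "No.")])
```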
2309.14525#6 | Aligning Large Multimodal Models with Factually Augmented RLHF | Instruction We have developed an AI assistant adept at facilitating image-based conversations. However, it occasionally generates what we call hallucinations, which are inaccuracies unsupported by the image content or real-world knowledge. In this task, we request that you select the most appropriate response from the AI model based on the conversation context. When making this selection, primarily consider these two factors: • Honesty: Fundamentally, the AI should provide accurate information and articulate its uncertainty without misleading the user. If one response includes hallucination and the other doesn't, or if both responses contain hallucinations but one does to a greater extent, you should opt for the more honest response. • Helpfulness: In scenarios where both responses are free from hallucinations, you should opt for the more helpful one. The AI should attempt to accomplish the task or answer the question posed, provided it's not harmful, in the most helpful and engaging manner possible. Annotation Task Please select the better response from A and B [IMAGE] [CONVERSATION CONTEXT] [RESPONSE A] [RESPONSE B] Question 1: Which response has fewer hallucinations in terms of the given image? Question 2: If you have selected a tie between Response 1 and Response 2 from the previous question, which response would be more helpful or less incorrect? Table 2: The instruction to the crowdworkers for human preference collection. 2 METHOD 2.1 MULTIMODAL RLHF Reinforcement Learning from Human Feedback (RLHF) (Ziegler et al., 2019; Stiennon et al., 2020; Ouyang et al., 2022; Bai et al., 2022a) has emerged as a powerful and scalable strategy for aligning Large Language Models (LLMs) with human values. In this work, we use RLHF to align LMMs. The basic pipeline of our multimodal RLHF can be summarized into three stages: Multimodal Supervised Fine-Tuning A vision encoder and a pre-trained LLM are jointly fine-tuned on an instruction-following demonstration dataset using token-level supervision to produce a supervised fine-tuned (SFT) model $\pi^{\mathrm{SFT}}$. Multimodal Preference Modeling In this stage, a reward model, alternatively referred to as a preference model, is trained to give a higher score to the "better" response. | 2309.14525#5 | 2309.14525#7 | 2309.14525 | [
"2302.13971"
] |
2309.14525#7 | Aligning Large Multimodal Models with Factually Augmented RLHF | The pairwise comparison training data are typically annotated by human annotators. Formally, let the aggregated preference data be represented as $\mathcal{D}_{\mathrm{RM}} = \{(\mathcal{I}, x, y_0, y_1, i)\}$, where $\mathcal{I}$ denotes the image, $x$ denotes the prompt, $y_0$ and $y_1$ are two associated responses, and $i$ indicates the index of the preferred response. The reward model employs a cross-entropy loss function: $\mathcal{L}(r_\theta) = -\mathbb{E}_{(\mathcal{I}, x, y_0, y_1, i) \sim \mathcal{D}_{\mathrm{RM}}}\left[\log \sigma\big(r_\theta(\mathcal{I}, x, y_i) - r_\theta(\mathcal{I}, x, y_{1-i})\big)\right].$ | 2309.14525#6 | 2309.14525#8 | 2309.14525 | [
"2302.13971"
]
2309.14525#8 | Aligning Large Multimodal Models with Factually Augmented RLHF | Reinforcement Learning Here, a policy model, initialized through multimodal supervised fine-tuning (SFT) (Ouyang et al., 2022; Touvron et al., 2023b), is trained to generate an appropriate response for each user query by maximizing the reward signal as provided by the reward model. To address potential over-optimization challenges, notably reward hacking, a per-token KL penalty derived from the initial policy model (Ouyang et al., 2022) is sometimes applied. Formally, given the set of collected images and user prompts, $\mathcal{D}_{\mathrm{RL}} = \{(\mathcal{I}, x)\}$, along with the fixed initial policy model $\pi^{\mathrm{INIT}}$ and the RL-optimized model $\pi^{\mathrm{RL}}_{\phi}$, the full optimization loss is articulated as: $\mathcal{L}(\pi^{\mathrm{RL}}_{\phi}) = -\mathbb{E}_{(\mathcal{I}, x) \sim \mathcal{D}_{\mathrm{RL}},\, y \sim \pi^{\mathrm{RL}}_{\phi}(y \mid \mathcal{I}, x)}\left[r_\theta(\mathcal{I}, x, y) - \beta \cdot \mathbb{D}_{\mathrm{KL}}\big(\pi^{\mathrm{RL}}_{\phi}(y \mid \mathcal{I}, x) \,\|\, \pi^{\mathrm{INIT}}(y \mid \mathcal{I}, x)\big)\right],$ where β is the hyper-parameter to control the scale of the KL penalty. 4 Preprint 2.2 AUGMENTING LLAVA WITH HIGH-QUALITY INSTRUCTION-TUNING Recent studies (Zhou et al., 2023; Touvron et al., 2023b) show that high-quality instruction tuning data is essential for aligning Large Language Models (LLMs). We find this becomes even more salient for LMMs. As these models traverse vast textual and visual domains, clear tuning instructions are crucial. Correctly aligned data ensures models produce contextually relevant outputs, effectively bridging language and visual gaps. For example, LLaVA synthesized 150k visual instruction data using the text-only GPT-4, where an image is represented as the associated captions on bounding boxes to prompt GPT-4. Though careful filtering has been applied to improve the quality, the pipeline can occasionally generate visually misaligned instruction data that can not be easily removed with an automatic filtering script, as highlighted in Table 1. In this work, we consider enhancing LLaVA (98k conversations, after holding out 60k conversations for preference modeling and RL training) with high-quality instruction-tuning data derived from existing human annotations. Specifically, we curated three categories of visual instruction data: " | 2309.14525#7 | 2309.14525#9 | 2309.14525 | [
"2302.13971"
] |
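A compact sketch of the two objectives above, assuming a PyTorch-style `reward_model` callable that maps (image, prompt, response) to a scalar score and per-token log-probabilities from the two policies; the β default is illustrative, not a value reported by the paper.

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_model, image, prompt, y_pref, y_rej):
    """Pairwise cross-entropy loss for the reward model:
    -log sigmoid of the score margin between the human-preferred
    and rejected responses (a Bradley-Terry-style objective)."""
    r_pref = reward_model(image, prompt, y_pref)  # scalar score per example
    r_rej = reward_model(image, prompt, y_rej)
    return -F.logsigmoid(r_pref - r_rej).mean()

def kl_shaped_return(reward, logp_rl, logp_init, beta=0.1):
    """Sequence-level return maximized by PPO: the learned reward minus a
    per-token KL penalty against the frozen initial policy.
    logp_* hold the log-probs of the sampled tokens, shape (batch, seq)."""
    per_token_kl = logp_rl - logp_init            # log pi_RL - log pi_INIT
    return reward - beta * per_token_kl.sum(-1)   # r_theta(I,x,y) - beta*KL
```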
2309.14525#9 | Aligning Large Multimodal Models with Factually Augmented RLHF | Yes" or "No" queries from VQA-v2 (83k) (Goyal et al., 2017b), multiple-choice questions from A-OKVQA (16k) (Marino et al., 2019), and grounded captions from Flickr30k (23k) (Young et al., 2014a). Our analysis revealed that this amalgamation of datasets significantly improved LMM capabilities on benchmark tests. Impressively, these results surpassed models (Dai et al., 2023; Li et al., 2023a; Laurençon et al., 2023) trained on datasets an order of magnitude larger than ours, as evidenced by Table 7 and 4. For a comprehensive breakdown of each dataset's influence, refer to Section 3.5. 2.3 HALLUCINATION-AWARE HUMAN PREFERENCE COLLECTION Inspired by the recent RLHF studies that collect helpfulness and harmlessness preferences (Bai et al., 2022b; Touvron et al., 2023b) separately, in this study, we decide to differentiate between responses that are merely less helpful and those that are inconsistent with the images (often characterized by multimodal hallucinations). To achieve this, we provide crowdworkers with the template illustrated in Table 2 to guide their annotations when comparing two given responses. With our current template design, we aim to prompt crowdworkers to identify potential hallucinations in the model's responses. Nonetheless, our training process integrates a single reward model that emphasizes both multimodal alignment and overall helpfulness2. We collect human preferences on 10k hold-out LLaVA data by re-sampling the last response with our SFT model and a temperature of 0.7. The reward model is initialized from the SFT model to obtain the basic multimodal capabilities. | 2309.14525#8 | 2309.14525#10 | 2309.14525 | [
"2302.13971"
]
2309.14525#10 | Aligning Large Multimodal Models with Factually Augmented RLHF | 2.4 FACTUALLY AUGMENTED RLHF (FACT-RLHF) We conduct multimodal RLHF on 50k hold-out LLaVA conversations, with additional 12k multi-choice questions from A-OKVQA and 10k yes/no questions subsampled from VQA-v2. Due to the concerns of existing hallucinations in the synthetic multi-round conversation data of LLaVA, we only use the first question in each conversation for RL training, which avoids the pre-existing hallucinations in the conversational context. Reward Hacking in RLHF In preliminary multimodal RLHF experiments, we observe that due to the intrinsic multimodal misalignment in the SFT model, the reward model is weak and sometimes cannot effectively detect hallucinations in the RL model's responses. In the text domain, previous work (Bai et al., 2022a; Touvron et al., 2023b) proposed to iteratively collect "fresh" human feedback. However, this can be quite costly and cannot effectively utilize existing human-annotated data, and there is no guarantee that more preference data can significantly improve the discriminative capabilities of the reward model for multimodal problems. Factual Augmentation To augment the capability of the reward model, we propose Factually Augmented RLHF (Fact-RLHF), where the reward model has access to additional ground-truth 2We are considering the development of a distinct Honest reward model, inspired by the approach in Touvron et al. (2023b). This introduces the possibility of constructing a piecewise Honesty-prioritized reward model. We earmark this direction for future exploration. | 2309.14525#9 | 2309.14525#11 | 2309.14525 | [
"2302.13971"
] |
2309.14525#11 | Aligning Large Multimodal Models with Factually Augmented RLHF | 5 # Preprint information such as image captions to calibrate its judgment. In original RLHF (Stiennon et al., 2020; OpenAI, 2022), the reward model needs to judge the quality of the response only based on the user query (i.e., the input image and prompt):
Image: [IMAGE]
User: [USER PROMPT]
Assistant: [RESPONSE]
Reward Model: [SCORE]
In Factually Augmented RLHF (Fact-RLHF), the reward model has additional information about the textual descriptions of the image:
Image: [IMAGE]
Factual Information: [5 COCO IMAGE CAPTIONS / 3 A-OKVQA RATIONALES]
User: [USER PROMPT]
Assistant: [RESPONSE]
Augmented Reward Model: [SCORE]
This prevents the reward model from being hacked by the policy model when the policy model generates some hallucinations that are clearly not grounded by the image captions. For general questions with COCO images, we concatenate the five COCO captions as the additional factual information, while for A-OKVQA questions, we use the annotated rationales as the factual information. The factually augmented reward model is trained on the same binary preference data as the vanilla reward model, except that the factual information is provided both during the model fine-tuning and inference. Symbolic Rewards: Correctness Penalty & Length Penalty In some of our RL data, certain questions come with a predetermined ground-truth answer. This includes binary choices (e.g., "Yes/No") in VQA-v2 and multiple-choice options (e.g., "ABCD") in A-OKVQA. | 2309.14525#10 | 2309.14525#12 | 2309.14525 | [
"2302.13971"
]
2309.14525#12 | Aligning Large Multimodal Models with Factually Augmented RLHF | These annotations can also be regarded as additional factual information. Therefore, in the Fact-RLHF algorithm, we further introduce a symbolic reward mechanism that penalizes selections that diverge from these ground-truth options. Furthermore, we observed that RLHF-trained models often produce more verbose outputs, a phenomenon also noted by Dubois et al. (2023). While these verbose outputs might be favored by users or by automated LLM-based evaluation systems (Sun et al., 2023b; Zheng et al., 2023), they tend to introduce more hallucinations for LMMs. In this work, we follow Sun et al. (2023a) and incorporate the response length, measured in the number of tokens, as an auxiliary penalizing factor. 3 EXPERIMENTS 3.1 NEURAL ARCHITECTURES Base Model We adopt the same network architecture as LLaVA (Liu et al., 2023a). Our LLM is based on Vicuna (Touvron et al., 2023a; Chiang et al., 2023), and we utilize the pre-trained CLIP visual encoder, ViT-L/14 (Radford et al., 2021). We use grid features both before and after the final Transformer layer. To project image features to the word embedding space, we employ a linear layer. | 2309.14525#11 | 2309.14525#13 | 2309.14525 | [
"2302.13971"
] |
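The augmented reward-model input and the symbolic rewards described above compose naturally at scoring time. Below is a hedged sketch; the function names, the length coefficient, and the penalty magnitude are illustrative assumptions, not values reported by the paper.

```python
def reward_model_input(image_tok, user_prompt, response, facts=None):
    """Assemble the reward-model input following the templates above.
    With facts=None this reduces to the vanilla RLHF input; with facts
    (COCO captions or A-OKVQA rationales) it is the Fact-RLHF input."""
    lines = [f"Image: {image_tok}"]
    if facts:
        lines.append("Factual Information: " + " ".join(facts))
    lines += [f"User: {user_prompt}", f"Assistant: {response}"]
    return "\n".join(lines)

def shaped_reward(rm_score, response_tokens, response=None, gt_option=None,
                  length_coef=0.01, wrong_penalty=1.0):
    """Apply the symbolic rewards: a length penalty on the number of
    generated tokens, plus a correctness penalty when a ground-truth
    "Yes/No" or "ABCD" option exists and the response diverges from it."""
    r = rm_score - length_coef * len(response_tokens)
    if gt_option is not None and response is not None:
        if gt_option.lower() not in response.lower():
            r -= wrong_penalty
    return r
```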
2309.14525#13 | Aligning Large Multimodal Models with Factually Augmented RLHF | It's important to note that we leverage the pre-trained checkpoints of the linear projection matrix from LLaVA, concentrating on the end-to-end fine-tuning phase for multi-modal alignment in our study. For LLaVA-SFT+-7b, we use a Vicuna-V1.5-7b LLM and ViT-L/14 with image resolution 256×256. For LLaVA-SFT+-13b, we use a Vicuna-V1.5-13b LLM and ViT-L/14 with image resolution 336×336. RL Models: Reward, Policy, and Value The architecture of the reward model is the same as the base LLaVA model, except that the embedding output of the last token is linearly projected to a scalar value to indicate the reward of the whole response. Following Dubois et al. (2023), we initialize the value model from the reward model. Therefore, when training an LLaVA-7B-based policy model with an LLaVA-13B-based reward model, the value model is also of 13B size. To fit all the models (i.e., policy, reward, value, original policy) into one GPU, we adopt LoRA (Hu et al., 2021) for all the fine-tuning processes in RLHF. We use Proximal Policy Optimization (PPO; | 2309.14525#12 | 2309.14525#14 | 2309.14525 | [
"2302.13971"
] |
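For the LoRA-based setup mentioned above, a sketch using the Hugging Face peft library could look as follows; the target modules, rank, and dropout are illustrative assumptions, as the exact hyperparameters are deferred to the paper's Appendix F.

```python
from peft import LoraConfig, get_peft_model

def wrap_with_lora(model, r=16, alpha=32):
    """Attach LoRA adapters so that policy, reward, value, and the frozen
    original policy can share one GPU during RLHF fine-tuning."""
    config = LoraConfig(
        r=r,
        lora_alpha=alpha,
        lora_dropout=0.05,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
        task_type="CAUSAL_LM",
    )
    return get_peft_model(model, config)
```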
2309.14525#14 | Aligning Large Multimodal Models with Factually Augmented RLHF | 6 Preprint Table 3: Automatic evaluation of LLaVA-RLHF on the LLaVA-Bench Evaluation. GPT-4 compares the answers from the VLM model outputs with the answers by GPT-4 (text-only) and gives a rating. We report the relative scores (Liu et al., 2023a) of VLM models compared to GPT-4 (text-only).
Model              Conv  Detail  Complex  Full-Set
LLaVA7B            75.1  75.4    92.3     81.0
VIGC7B             83.3  80.6    93.1     85.8
LLaVA-SFT+7B       88.8  74.6    95.0     86.3
LLaVA-RLHF7B       93.0  79.0    109.5    94.1
LLaVA13B×336       87.2  74.3    92.9     84.9
VIGC13B×336        88.9  77.4    93.5     86.8
LLaVA-SFT+13B×336  85.8  75.5    93.9     85.2
LLaVA-RLHF13B×336  93.9  82.5    110.1    95.6
[Figure 2 is a chart comparing models (legend includes IDEFICS9B, Kosmos-2, LLaVA13B×336, IDEFICS80B, InstructBLIP13B, and LLaVA-RLHF13B×336) across categories including Overall, Adversarial, Holistic, Comparison, Counting, and Relation.] Figure 2: Detailed performance of different models on the eight categories in MMHAL-BENCH, where "Overall" indicates the averaged performance across all categories. The questions are collected by adversarially filtering on the original LLaVA13B×336 model. Schulman et al. (2017)) with a KL penalty for the RL training. Without further notice, both LLaVA-RLHF-7b and LLaVA-RLHF-13b are trained with a LLaVA-SFT+-13b initialized reward model. More details can be found in Appendix F. # 3.2 MMHAL-BENCH DATA COLLECTION To quantify and evaluate the hallucination in LMM responses, we have created a new benchmark MMHAL-BENCH. There are two major differences between MMHAL-BENCH and previous VLM benchmarks: 1) Speciality: In contrast to prevalent LMM benchmarks Liu et al. (2023a;b); Li et al. (2023d) that evaluate the response quality in the general sense (e.g., helpfulness, relevance), we focus on determining whether hallucination exists in the LMM responses. Our evaluation metrics are directly developed on this main criterion. 2) Practicality: Some previous LMM benchmarks Li et al. (2023d); Rohrbach et al. (2018) also examine hallucination, but they have limited the questions to yes/no questions, and we found the results may sometimes disagree with the detailed description generated by the LMM. Instead of over-simplifying the questions, we adopt general, realistic, and open-ended questions in our MMHAL-BENCH, which can better reflect the response quality in practical user-LMM interactions. 7 # Preprint | 2309.14525#13 | 2309.14525#15 | 2309.14525 | [
"2302.13971"
] |
2309.14525#15 | Aligning Large Multimodal Models with Factually Augmented RLHF | In MMHAL-BENCH, we have meticulously designed 96 image-question pairs, ranging in 8 question categories × 12 object topics. More specifically, we have observed that LMMs often make false claims about the image contents when answering some types of questions, and thus design our questions according to these types:
• Object attribute: LMMs incorrectly describe the visual attributes of individual objects, such as color and shape.
• Adversarial object: LMMs answer questions involving something that does not exist in the image, instead of pointing out that the referred object cannot be found.
• Comparison: LMMs incorrectly compare the attributes of multiple objects.
• Counting: LMMs fail to count the number of the named objects.
• Spatial relation: LMMs fail to understand the spatial relations between multiple objects in the response.
• Environment: LMMs make wrong inferences about the environment of the given image.
• Holistic description: LMMs make false claims about contents in the given image when giving a comprehensive and detailed description of the whole image.
• Others: LMMs fail to recognize the text or icons, or incorrectly reason based on the observed visual information.
We create and filter the questions in an adversarial manner. More specifically, we design the image-question pairs to ensure that the original LLaVA13B×336 model hallucinates when answering these questions. | 2309.14525#14 | 2309.14525#16 | 2309.14525 | [
"2302.13971"
] |
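One way to picture the benchmark entry implied by this design (a question category, an object topic, and a reference answer per image-question pair) is the following illustrative record structure; all field names and the example values are hypothetical, not the released schema.

```python
from dataclasses import dataclass

@dataclass
class MMHalExample:
    image_id: str        # drawn from OpenImages validation/test splits
    question: str        # open-ended, adversarially filtered
    question_type: str   # one of the 8 categories listed above
    object_topic: str    # one of the 12 COCO meta-categories
    human_answer: str    # standard human-generated reference answer

ex = MMHalExample("openimages/0001",
                  "What color is the cup on the table?",
                  "object attribute", "kitchen", "The cup is white.")
```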
2309.14525#16 | Aligning Large Multimodal Models with Factually Augmented RLHF | While these questions are initially tailored based on LLaVA13B×336's behavior, we have observed that they also have a broader applicability, causing other LMMs to hallucinate as well. To avoid data leakage or evaluation on data that LMMs have observed during training, we select images from the validation and test sets of OpenImages (Kuznetsova et al., 2020) and design all brand-new questions. Our image-question pairs cover 12 common object meta-categories from COCO (Lin et al., 2014), including " | 2309.14525#15 | 2309.14525#17 | 2309.14525 | [
"2302.13971"
] |
2309.14525#17 | Aligning Large Multimodal Models with Factually Augmented RLHF | accessory", "animal", "appliance", "electronic", "food", "furniture", "indoor", "kitchen", "outdoor", "person", "sports", and "vehicle". When evaluating LMMs on MMHAL-BENCH, we employ the powerful GPT-4 model (OpenAI, 2023) to analyze and rate the responses. Currently, the publicly available GPT-4 API only supports text input, so it cannot judge directly based on the image contents. Therefore, to aid GPT-4's assessment, we also provide category names of the image content, and a standard human-generated answer in the prompt, in addition to the question and LMM response pair. Consequently, GPT-4 can determine whether hallucination exists in the LMM response by comparing it against the image content and the thorough human-generated answer. When provided with adequate information from MMHAL-BENCH, GPT-4 can make reasonable decisions aligned with human judgments. For example, when deciding whether hallucination exists in responses from LLaVA13B×336 and IDEFICS80B, GPT-4 agrees with human judgments in 94% of the cases. Please see the Appendix for the example image-question pairs and GPT-4 prompts we used for MMHAL-BENCH evaluation. 3.3 RESULTS We use LLaVA-Bench (Liu et al., 2023a) and our MMHAL-BENCH as our main evaluation metrics for their high alignment with human preferences. In addition, we conducted tests on widely-recognized Large Multimodal Model benchmarks. We employed MMBench (Liu et al., 2023b), a multi-modal benchmark offering an objective evaluation framework comprising 2,974 multiple-choice questions spanning 20 ability dimensions. This benchmark utilizes ChatGPT to juxtapose model predictions against desired choices, ensuring an equitable assessment of VLMs across varying instruction-following proficiencies. Furthermore, we incorporated POPE (Li et al., 2023d), a polling-based query technique, to offer an evaluation of Large Multimodal Model object perception tendencies. | 2309.14525#16 | 2309.14525#18 | 2309.14525 | [
"2302.13971"
] |
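Since the GPT-4 judge is text-only, the evaluation input described above can be assembled roughly as follows; the exact wording of the official prompt lives in the paper's appendix, so this is only an approximation with hypothetical phrasing.

```python
def build_judge_prompt(question, categories, human_answer, lmm_response):
    """Compose the text-only GPT-4 judging input for MMHAL-BENCH.
    The image is conveyed through its content category names plus a
    standard human-generated answer, as described above."""
    return (
        "Image content categories: " + ", ".join(categories) + "\n"
        f"Question: {question}\n"
        f"Standard human-generated answer: {human_answer}\n"
        f"LMM response to rate: {lmm_response}\n"
        "Decide whether the LMM response contains hallucination, "
        "then rate its quality."
    )
```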
2309.14525#18 | Aligning Large Multimodal Models with Factually Augmented RLHF | High-quality SFT data is crucial for capability benchmarks. By delving into the specific performances for the capability benchmarks (i.e., MMBench and POPE), we observe a notable improvement in capabilities brought by high-quality instruction-tuning data (LLaVA-SFT+) in Tables 4 and 7. The LLaVA-SFT+7B model exemplifies this with an impressive performance of 52.1% on MMBench and an 82.7% F1 score on POPE, marking an improvement over the original LLaVA by margins of 13.4% and 6.7% respectively. However, it's worth noting that LLaVA-SFT+ does | 2309.14525#17 | 2309.14525#19 | 2309.14525 | [
"2302.13971"
] |
2309.14525#19 | Aligning Large Multimodal Models with Factually Augmented RLHF | 8 Preprint Table 4: CircularEval multi-choice accuracy results on MMBench dev set. We adopt the following abbreviations: LR for Logical Reasoning; AR for Attribute Reasoning; RR for Relation Reasoning; FP-C for Fine-grained Perception (Cross Instance); FP-S for Fine-grained Perception (Single Instance); CP for Coarse Perception. Baseline results are taken from Liu et al. (2023b).
Model              Data   Overall  LR    AR    RR    FP-S  FP-C  CP
OpenFlamingo9B     -      6.6      4.2   15.4  0.9   8.1   1.4   5.0
MiniGPT-47B        5k     24.3     7.5   31.3  4.3   30.3  9.0   35.6
LLaMA-Adapter7B    52k    41.2     11.7  35.3  29.6  47.5  38.6  56.4
Otter-I9B          2.8M   51.4     32.5  56.7  53.9  46.8  38.6  65.4
Shikra7B           5.5M   58.8     25.8  56.7  58.3  57.2  57.9  75.8
Kosmos-2           14M    59.2     46.7  55.7  43.5  64.3  49.0  72.5
InstructBLIP7B     1.2M   36.0     14.2  46.3  22.6  37.0  21.4  49.0
IDEFICS9B          1M     48.2     20.8  54.2  33.0  47.8  36.6  67.1
IDEFICS80B         1M     54.6     29.0  67.8  46.5  56.0  48.0  61.9
InstructBLIP13B    1.2M   44.0     19.1  54.2  34.8  47.8  24.8  56.4 | 2309.14525#18 | 2309.14525#20 | 2309.14525 | [
"2302.13971"
]
2309.14525#20 | Aligning Large Multimodal Models with Factually Augmented RLHF | LLaVA7B            158k   38.7     16.7  48.3  30.4  45.5  32.4  40.6
LLaVA-SFT+7B       220k   52.1     28.3  63.2  37.4  53.2  35.9  66.8
LLaVA-RLHF7B       280k   51.4     24.2  63.2  39.1  50.2  40.0  66.1
LLaVA13B×336       158k   47.5     23.3  59.7  31.3  41.4  38.6  65.8
LLaVA-SFT+13B×336  220k   57.5     25.8  65.7  54.8  57.9  51.0  68.5
LLaVA-RLHF13B×336  280k   60.1     29.2  67.2  56.5  60.9  53.8  71.5 | 2309.14525#19 | 2309.14525#21 | 2309.14525 | [
"2302.13971"
] |
2309.14525#21 | Aligning Large Multimodal Models with Factually Augmented RLHF | trail behind models like Kosmos and Shikra. Despite this, LLaVA-SFT+ stands out in terms of sample efficiency, utilizing only 280k fine-tuning data, a 5% fraction of what is employed by the aforementioned models. Furthermore, this enhancement isn't confined to just one model size. When scaled up, LLaVA-SFT+13B×336 achieves commendable results, attaining 57.5% on MMBench and 82.9% on POPE. Comparatively, the effect of RLHF on the capability benchmarks is more mixed. LLaVA-RLHF shows subtle degradations at the 7b scale, but the 13b LLaVA-RLHF improves over LLaVA-SFT+ by 3% on MMBench. This phenomenon is similar to the Alignment Tax observed in previous work (Bai et al., 2022a). Nonetheless, with our current empirical scaling law of LLaVA-RLHF, we believe RLHF alignment would not damage the general capabilities of LMMs for models of larger scales. RLHF improves human alignment benchmarks further. From another angle, even though high-quality instruction data demonstrates large gains in capability assessment, it does not improve much on human-alignment benchmarks including LLaVA-Bench and MMHAL-BENCH, which is also evident in recent LLM studies (Wang et al., 2023). LLaVA-RLHF shows a significant improvement in aligning with human values. It attains scores of 2.05 (7b) and 2.53 (13b) on MMHAL-BENCH and improves LLaVA-SFT+ by over 10% on LLaVA-Bench. We also presented qualitative examples in Table 1, which shows LLaVA-RLHF produces more reliable and helpful outputs. 3.4 ABLATION ANALYSIS We conduct ablation studies on LLaVA7B and evaluate over the four aforementioned benchmarks. 3.5 ABLATION ON HIGH-QUALITY INSTRUCTION-TUNING DATA In Table 5, we evaluate the impact of individual instruction-tuning datasets. For the sake of simplicity, we did not adjust the mixture rate, earmarking that consideration for future research. Our findings indicate that A-OKVQA (Schwenk et al., 2022) contributes significantly to performance enhancements, boosting results by +9.8% on MMBench and a more modest +3.8% on POPE. In contrast, VQA-v2 (Goyal et al., 2017a) is particularly influential on POPE, where it leads to a 6% improvement, while only having a slight impact on MMBench. This differential can possibly be attributed to the overlapping "Yes/No" format in VQA and the multiple-choice structure of A-OKVQA. Flickr30k notably enhances the performance in LLaVA-Bench and MMHAL-BENCH, a | 2309.14525#20 | 2309.14525#22 | 2309.14525 | [
"2302.13971"
] |
2309.14525#22 | Aligning Large Multimodal Models with Factually Augmented RLHF | Preprint Table 5: Ablation studies on methodologies (SFT, RLHF, and Fact-RLHF), data mixtures (LLaVA with additional SFT datasets: VQA, AOK, Flickr), and model sizes of the policy model (PM) and the reward model (RM).
Method     PM  RM   VQA  AOK  Flickr  MMBench  POPE  LLaVA-B  MMHAL-B
SFT        7b  -                      38.7     76.0  81.0     1.3
SFT        7b  -    ✓                 42.9     82.0  30.4     2.0
SFT        7b  -         ✓            48.5     79.8  34.7     1.1
SFT        7b  -              ✓       37.8     77.6  46.6     1.5
SFT        7b  -    ✓    ✓    ✓       52.1     82.7  86.3     1.8
RLHF       7b  7b                     40.0     78.2  85.4     1.4
RLHF       7b  7b   ✓    ✓    ✓       50.8     82.7  87.8     1.8
RLHF       7b  13b  ✓    ✓    ✓       48.9     82.7  93.4     1.8
Fact-RLHF  7b  13b  ✓    ✓    ✓       51.4     81.5  94.1     2.1 | 2309.14525#21 | 2309.14525#23 | 2309.14525 | [
"2302.13971"
] |
2309.14525#23 | Aligning Large Multimodal Models with Factually Augmented RLHF | likely consequence of the inherently grounded nature of the task. Furthermore, amalgamating these three datasets results in compounded performance gains across various capability benchmarks. 3.6 ABLATION ON FACT-AUGMENTED RLHF We compare the performance of Fact-Augmented RLHF (Fact-RLHF) with standard RLHF in Table 5. Our findings indicate that while the conventional RLHF exhibits improvement on LLaVA-Bench, it underperforms on MMHAL-BENCH. This can be attributed to the model's tendency, during PPO, to manipulate the naive RLHF reward model by producing lengthier responses rather than ones that are less prone to hallucinations. On the other hand, our Fact-RLHF demonstrates enhancements on both LLaVA-Bench and MMHAL-BENCH. This suggests that Fact-RLHF not only better aligns with human preferences but also effectively minimizes hallucinated outputs. 3.7 DATA FILTERING VS. RLHF In our preliminary tests, we employed the Fact-RLHF reward model to filter out 70%, 50%, and 30% of LLaVA data. Subsequently, we finetuned an LLaVA model on this filtered data, yielding scores of 81.2, 81.5, and 81.8 on LLaVA-Bench. However, performance on MMHAL-BENCH, POPE, and MMBench remained largely unchanged. We believe this stagnation can be attributed to two factors: the absence of a negative feedback mechanism preventing the model from identifying hallucinations in its output, and the potential limitations of our Fact-RLHF reward model, especially when compared against the high-capacity oracle models in previous successful studies (Touvron et al., 2023b). # 4 RELATED WORK | 2309.14525#22 | 2309.14525#24 | 2309.14525 | [
"2302.13971"
] |
2309.14525#24 | Aligning Large Multimodal Models with Factually Augmented RLHF | Large Multimodal Models Recent success in Large Language Models (LLMs) such as GPTs (Brown et al., 2020; OpenAI, 2023), PaLM (Chowdhery et al., 2022; Anil et al., 2023), BLOOM (Scao et al., 2022; Muennighoff et al., 2022), LLaMA (Touvron et al., 2023a;b), Alpaca (Taori et al., 2023) and Vicuna (Chiang et al., 2023) has spurred significant improvements in multi-modal models. Flamingo (Alayrac et al.) pioneered integrating LLMs into vision-language pretraining, utilizing gated cross-attention dense blocks to adapt to visual features; its open-source variants are OpenFlamingo (Awadalla et al., 2023) and IDEFICS (Laurençon et al., 2023). PaLI (Chen et al., 2022; 2023b) studies the scaling factor of V&L components across a wide range of tasks. PaLM-E (Driess et al., 2023) further extends LMM to the embodied domain. BLIP-2 (Li et al., 2023c) introduced the Querying Transformer (Q-former) to bridge the gap between image and language encoders, which was further improved by InstructBLIP (Dai et al., 2023). Otter (Li et al., 2023b;a) focuses on enhancing OpenFlamingo's instruction-following capability. MiniGPT-4 (Zhu et al., 2023) suggests GPT4's prowess is due to sophisticated LLMs and recommends using a single projection layer to align visual and linguistic models. It showcases abilities akin to GPT4 but is computationally efficient. mPLUG-Owl (Ye et al., 2023) offers a new approach: initially aligning | 2309.14525#23 | 2309.14525#25 | 2309.14525 | [
"2302.13971"
] |
2309.14525#25 | Aligning Large Multimodal Models with Factually Augmented RLHF | 10 # Preprint visual features and then fine-tuning the language model using LoRA (Hu et al., 2021). Recently, QWen-VL (Bai et al., 2023) scales the pre-training of LMM to 1.4B data and achieves impressive results across benchmarks. Among them, LLaVA (Liu et al., 2023a; Lu et al., 2023) pioneered LMM work by harnessing GPT4 (OpenAI, 2023) for generating vision-language tuning datasets similar to text instruction efforts (Wei et al., 2021; Chung et al., 2022; Longpre et al., 2023; Sanh et al., 2021; Mukherjee et al., 2023; Taori et al., 2023; Köpf et al., 2023). However, due to the syntactic nature of these generated datasets, misalignments between image and text modalities are prevalent. Our research is the first to address this misalignment through RLHF. Hallucination Prior to the advent of LLMs, the NLP community primarily defined "hallucination" as the generation of nonsensical content or content that deviates from its source (Ji et al., 2023). The introduction of versatile LLMs has expanded this definition, as outlined by (Zhang et al., 2023) into: 1) Input-conflicting hallucination, which veers away from user-given input, exemplified in machine translation (Lee et al., 2018; Zhou et al., 2020); 2) Context-conflicting hallucination where output contradicts prior LLM-generated information (Shi et al., 2023); and 3) Fact-conflicting hallucination, where content misaligns with established knowledge (Lin et al., 2021). Within the LMM realm, "object hallucination" is well-documented (Rohrbach et al., 2018; MacLeod et al., 2017; Li et al., 2023d; Biten et al., 2022), referring to models producing descriptions or captions including objects that don't match or are missing from the target image. | 2309.14525#24 | 2309.14525#26 | 2309.14525 | [
"2302.13971"
] |
2309.14525#26 | Aligning Large Multimodal Models with Factually Augmented RLHF | We expand on this, encompassing any LMM-generated description unfaithful to image aspects, including relations, attributes, environments, and so on. Consequently, we present MMHAL-BENCH, aiming to holistically pinpoint and measure hallucinations in LMMs. # 5 DISCUSSIONS & LIMITATIONS Hallucination phenomena are observed in both Large Language Models (LLMs) and Large Multimodal Models (LMMs). The potential reasons are two-fold. Firstly, a salient factor contributing to this issue is the low quality of instruction tuning data for current LMMs, as they are typically synthesized by more powerful LLMs such as GPT-4. We expect our proposed high-quality vision instruction-tuning data and future efforts on manually curating high-quality vision instruction tuning data can alleviate this problem. | 2309.14525#25 | 2309.14525#27 | 2309.14525 | [
"2302.13971"
]
2309.14525#27 | Aligning Large Multimodal Models with Factually Augmented RLHF | Secondly, the adoption of behavior cloning training in instruction-tuned LMMs emerges as another fundamental cause (Schulman, 2023). Since the instruction data labelers lack insight into the LMM's visual perception of an image, such training inadvertently conditions LMMs to speculate on uncertain content. To circumvent this pitfall, the implementation of reinforcement learning-based training provides a promising avenue, guiding the model to articulate uncertainties more effectively (Lin et al., 2022; Kadavath et al., 2022). Our work demonstrates a pioneering effort in this direction. Figure 3 illustrates the two sources of hallucination in current behavior cloning training of LLMs. However, while LLaVA-RLHF enhances human alignment, reduces hallucination, and encourages truthfulness and calibration, applying RLHF can inadvertently dampen the performance of small-sized LMMs. Balancing alignment enhancements without compromising the capability of LMM and LLM is still an unresolved challenge. Furthermore, though we've demonstrated the effective use of linear projection in LLaVA with top-tier instruction data, determining an optimal mixture and scaling it to bigger models remains intricate. Our research primarily delves into the fine-tuning phase of VLMs, leaving the issues of misalignment in other modalities and during pre-training yet to be explored. Finally, while MMHAL-BENCH emphasizes the evaluation of LMMs with an aim to curtail hallucinations, it is noteworthy that short or evasive responses can inadvertently attain high scores on MMHAL-BENCH. | 2309.14525#26 | 2309.14525#28 | 2309.14525 | [
"2302.13971"
] |
2309.14525#28 | Aligning Large Multimodal Models with Factually Augmented RLHF | This underlines an intrinsic trade-off between honesty and helpfulness (Bai et al., 2022a). Consequently, for a more comprehensive assessment of alignment with human preferences, we advocate for the evaluation of prospective LMMs using both MMHAL-BENCH and LLaVA-Bench. 11 Preprint # 6 CONCLUSION We proposed several strategies to tackle the multimodal misalignment problems, particularly for vision language models (VLMs), which often produce text inconsistent with the associated images. First, we enrich GPT-4 generated vision instruction tuning data from LLaVA with existing human-authored image-text pairs. Next, we adopt the Reinforcement Learning from Human Feedback (RLHF) algorithm from the text domain to bridge vision-language gaps, wherein human evaluators discern and mark the more hallucinated output. We train the VLM to optimize against simulated human preferences. Moreover, we introduce the Factually Augmented RLHF, leveraging additional factual information such as image captions to enhance the reward model, countering reward hacking in RLHF, and boosting model performance. For tangible real-world impact assessment, we have devised MMHAL-BENCH, an evaluation benchmark targeting the penalization of hallucination. Remarkably, LLaVA-RLHF, being the first VLM trained with RLHF, shows a notable surge in performance across benchmarks. We opensource our code and data, and hope our findings could help the future development of more reliable and human-aligned LLMs and LMMs. # REFERENCES Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. | 2309.14525#27 | 2309.14525#29 | 2309.14525 | [
"2302.13971"
] |
2309.14525#29 | Aligning Large Multimodal Models with Factually Augmented RLHF | Flamingo: a visual language model for few-shot learning. In Advances in Neural Information Processing Systems. Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. arXiv preprint arXiv:2305.10403, 2023. Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, et al. Openflamingo: An open-source framework for training large autoregressive vision-language models. arXiv preprint arXiv:2308.01390, 2023. Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. Qwen-vl: A frontier large vision-language model with versatile abilities. arXiv preprint arXiv:2308.12966, 2023. Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022a. | 2309.14525#28 | 2309.14525#30 | 2309.14525 | [
"2302.13971"
] |
2309.14525#30 | Aligning Large Multimodal Models with Factually Augmented RLHF | Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, and Jared Kaplan. Constitutional ai: Harmlessness from ai feedback, 2022b. | 2309.14525#29 | 2309.14525#31 | 2309.14525 | [
"2302.13971"
] |
2309.14525#31 | Aligning Large Multimodal Models with Factually Augmented RLHF | Ali Furkan Biten, Lluís Gómez, and Dimosthenis Karatzas. Let there be a clock on the beach: Reducing object hallucination in image captioning. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1381–1390, 2022. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020. Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, and Rui Zhao. Shikra: Unleashing multimodal llm's referential dialogue magic. arXiv preprint arXiv:2306.15195, 2023a. 12 Preprint | 2309.14525#30 | 2309.14525#32 | 2309.14525 | [
"2302.13971"
] |
2309.14525#32 | Aligning Large Multimodal Models with Factually Augmented RLHF | Xi Chen, Xiao Wang, Soravit Changpinyo, AJ Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, et al. PaLI: A jointly-scaled multilingual language-image model. arXiv preprint arXiv:2209.06794, 2022. Xi Chen, Josip Djolonga, Piotr Padlewski, Basil Mustafa, Soravit Changpinyo, Jialin Wu, Carlos Riquelme Ruiz, Sebastian Goodman, Xiao Wang, Yi Tay, et al. Pali-x: On scaling up a multilingual vision and language model. arXiv preprint arXiv:2305.18565, 2023b. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. URL https://vicuna.lmsys.org. | 2309.14525#31 | 2309.14525#33 | 2309.14525 | [
"2302.13971"
] |
2309.14525#33 | Aligning Large Multimodal Models with Factually Augmented RLHF | Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022. Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructblip: Towards general-purpose vision-language models with instruction tuning. arXiv preprint arXiv:2305.06500, 2023. Danny Driess, Fei Xia, Mehdi SM Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, et al. Palm-e: An embodied multimodal language model. arXiv preprint arXiv:2303.03378, 2023. Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. Alpacafarm: A simulation framework for methods that learn from human feedback. arXiv preprint arXiv:2305.14387, 2023. | 2309.14525#32 | 2309.14525#34 | 2309.14525 | [
"2302.13971"
] |
2309.14525#34 | Aligning Large Multimodal Models with Factually Augmented RLHF | Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 6904–6913, 2017a. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 6904–6913, 2017b. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021. Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1–38, 2023. Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, et al. Language models (mostly) know what they know. arXiv preprint arXiv:2207.05221, 2022. Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Alexander Kolesnikov, et al. The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale. International Journal of Computer Vision, 128(7):1956–1981, 2020. 13 | 2309.14525#33 | 2309.14525#35 | 2309.14525 | [
"2302.13971"
] |
2309.14525#35 | Aligning Large Multimodal Models with Factually Augmented RLHF | Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi-Rui Tam, Keith Stevens, Abdullah Barhoum, Nguyen Minh Duc, Oliver Stanley, Richárd Nagyfi, Shahul ES, Sameer Suri, David Glushkov, Arnav Dantuluri, Andrew Maguire, Christoph Schuhmann, Huu Nguyen, and Alexander Mattick. Openassistant conversations – democratizing large language model alignment, 2023. Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M Rush, Douwe Kiela, et al. | 2309.14525#34 | 2309.14525#36 | 2309.14525 | [
"2302.13971"
] |
2309.14525#36 | Aligning Large Multimodal Models with Factually Augmented RLHF | Obelisc: An open web-scale filtered dataset of interleaved image-text documents. arXiv preprint arXiv:2306.16527, 2023. Katherine Lee, Orhan Firat, Ashish Agarwal, Clara Fannjiang, and David Sussillo. Hallucinations in neural machine translation. 2018. Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Fanyi Pu, Jingkang Yang, Chunyuan Li, and Ziwei Liu. Mimic-it: Multi-modal in-context instruction tuning. 2023a. Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu. | 2309.14525#35 | 2309.14525#37 | 2309.14525 | [
"2302.13971"
] |
2309.14525#37 | Aligning Large Multimodal Models with Factually Augmented RLHF | Otter: A multi-modal model with in-context instruction tuning. arXiv preprint arXiv:2305.03726, 2023b. Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597, 2023c. Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. Evaluating object hallucination in large vision-language models. arXiv preprint arXiv:2305.10355, 2023d. Stephanie Lin, Jacob Hilton, and Owain Evans. | 2309.14525#36 | 2309.14525#38 | 2309.14525 | [
"2302.13971"
] |
2309.14525#38 | Aligning Large Multimodal Models with Factually Augmented RLHF | Truthfulqa: Measuring how models mimic human falsehoods. arXiv preprint arXiv:2109.07958, 2021. Stephanie Lin, Jacob Hilton, and Owain Evans. Teaching models to express their uncertainty in words. arXiv preprint arXiv:2205.14334, 2022. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pp. 740–755. Springer, 2014. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. 2023a. Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, et al. Mmbench: Is your multi-modal model an all-around player? arXiv preprint arXiv:2307.06281, 2023b. Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V Le, Barret Zoph, Jason Wei, et al. | 2309.14525#37 | 2309.14525#39 | 2309.14525 | [
"2302.13971"
] |
2309.14525#39 | Aligning Large Multimodal Models with Factually Augmented RLHF | The flan collection: Designing data and methods for effective instruction tuning. arXiv preprint arXiv:2301.13688, 2023. Yadong Lu, Chunyuan Li, Haotian Liu, Jianwei Yang, Jianfeng Gao, and Yelong Shen. An empirical study of scaling instruct-tuned large multimodal models. arXiv preprint arXiv:2309.09958, 2023. Haley MacLeod, Cynthia L Bennett, Meredith Ringel Morris, and Edward Cutrell. Understanding blind people's experiences with computer-generated captions of social media images. In Proceedings of the 2017 CHI conference on human factors in computing systems, pp. 5988–5999, 2017. Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. Ok-vqa: A visual question answering benchmark requiring external knowledge. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 3195–3204, 2019. Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng-Xin Yong, Hailey Schoelkopf, et al. Crosslingual generalization through multitask finetuning. arXiv preprint arXiv:2211.01786, 2022. | 2309.14525#38 | 2309.14525#40 | 2309.14525 | [
"2302.13971"
] |
2309.14525#40 | Aligning Large Multimodal Models with Factually Augmented RLHF | Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, and Ahmed Awadallah. Orca: Progressive learning from complex explanation traces of gpt-4. arXiv preprint arXiv:2306.02707, 2023. OpenAI. OpenAI: Introducing ChatGPT, 2022. URL https://openai.com/blog/chatgpt. OpenAI. Gpt-4 technical report, 2023. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. | 2309.14525#39 | 2309.14525#41 | 2309.14525 | [
"2302.13971"
] |
2309.14525#41 | Aligning Large Multimodal Models with Factually Augmented RLHF | Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pp. 8748–8763. PMLR, 2021. Anna Rohrbach, Lisa Anne Hendricks, Kaylee Burns, Trevor Darrell, and Kate Saenko. Object hallucination in image captioning. arXiv preprint arXiv:1809.02156, 2018. Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, et al. Multitask prompted training enables zero-shot task generalization. In International Conference on Learning Representations, 2021. | 2309.14525#40 | 2309.14525#42 | 2309.14525 | [
"2302.13971"
] |
2309.14525#42 | Aligning Large Multimodal Models with Factually Augmented RLHF | Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. Bloom: A 176b-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100, 2022. John Schulman. Reinforcement learning from human feedback: Progress and challenges, Apr 2023. URL https://www.youtube.com/watch?v=hhiLw5Q_UFg&ab_channel=BerkeleyEECS. Berkeley EECS. John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017. Dustin Schwenk, Apoorv Khandelwal, Christopher Clark, Kenneth Marino, and Roozbeh Mottaghi. A-okvqa: A benchmark for visual question answering using world knowledge. In European Conference on Computer Vision, pp. 146–162. Springer, 2022. Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. Replug: Retrieval-augmented black-box language models. arXiv preprint arXiv:2301.12652, 2023. | 2309.14525#41 | 2309.14525#43 | 2309.14525 | [
"2302.13971"
] |
2309.14525#43 | Aligning Large Multimodal Models with Factually Augmented RLHF | Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. Learning to summarize with human feedback. Advances in Neural Information Processing Systems, 33:3008–3021, 2020. Zhiqing Sun, Yikang Shen, Qinhong Zhou, Hongxin Zhang, Zhenfang Chen, David Cox, Yiming Yang, and Chuang Gan. Self-alignment with principle-following reward models. Personal communication, 2023a. Zhiqing Sun, Yikang Shen, Qinhong Zhou, Hongxin Zhang, Zhenfang Chen, David Cox, Yiming Yang, and Chuang Gan. Principle-driven self-alignment of language models from scratch with minimal human supervision. arXiv preprint arXiv:2305.03047, 2023b. Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca, 2023. | 2309.14525#42 | 2309.14525#44 | 2309.14525 | [
"2302.13971"
] |
2309.14525#44 | Aligning Large Multimodal Models with Factually Augmented RLHF | Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023a. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023b. Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A Smith, Iz Beltagy, et al. How far can camels go? Exploring the state of instruction tuning on open resources. arXiv preprint arXiv:2306.04751, 2023. Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. In International Conference on Learning Representations, 2021. Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. mplug-owl: Modularization empowers large language models with multimodality. arXiv preprint arXiv:2304.14178, 2023. Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67–78, 2014. | 2309.14525#43 | 2309.14525#45 | 2309.14525 | [
"2302.13971"
] |
2309.14525#45 | Aligning Large Multimodal Models with Factually Augmented RLHF | Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, et al. | 2309.14525#44 | 2309.14525#46 | 2309.14525 | [
"2302.13971"
] |
2309.14525#46 | Aligning Large Multimodal Models with Factually Augmented RLHF | Siren's song in the ai ocean: A survey on hallucination in large language models. arXiv preprint arXiv:2309.01219, 2023. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging llm-as-a-judge with mt-bench and chatbot arena, 2023. Chunting Zhou, Graham Neubig, Jiatao Gu, Mona Diab, Paco Guzman, Luke Zettlemoyer, and Marjan Ghazvininejad. Detecting hallucinated content in conditional neural sequence generation. arXiv preprint arXiv:2011.02593, 2020. Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. | 2309.14525#45 | 2309.14525#47 | 2309.14525 | [
"2302.13971"
] |
2309.14525#47 | Aligning Large Multimodal Models with Factually Augmented RLHF | Lima: Less is more for alignment. arXiv preprint arXiv:2305.11206, 2023. Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023. Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2019. | 2309.14525#46 | 2309.14525#48 | 2309.14525 | [
"2302.13971"
] |
2309.14525#48 | Aligning Large Multimodal Models with Factually Augmented RLHF | A SOURCE OF MULTIMODAL HALLUCINATION [Figure 3: Two sources of hallucination in Supervised Fine-Tuning (SFT): GPT-4 synthesized data contains hallucinations; instruction data labelers have no insight into what LMMs know or see, which essentially teaches them to speculate on uncertain content (i.e., hallucinate). The depicted example: an image of a coffee shop menu is clear to the human annotator, who labels the answer to "What is the name of the shop?" as "Roly's Café", but the image is vague to the LMM, so hallucination can occur even for high-quality human-labeled instruction tuning data that does not align with the vision cognition of the LMM agent itself; the LMM can only learn to guess.] B DETAILED EVALUATION RESULTS ON MMHAL-BENCH We include Table 6 for the full evaluation results on MMHAL-BENCH. Table 6: Detailed evaluation results for different LMMs on MMHAL-BENCH (Overall Score, higher is better; Hallucination Rate, lower is better; Score in Each Question Type, higher is better). | 2309.14525#47 | 2309.14525#49 | 2309.14525 | [
"2302.13971"
] |
2309.14525#49 | Aligning Large Multimodal Models with Factually Augmented RLHF | Model                Overall↑  Hal. Rate↓  Attribute  Adversarial  Comparison  Counting  Relation  Environment  Holistic  Other
Kosmos-2             1.69      0.68        2.00       0.25         1.42        1.67      1.67      2.67         2.50      1.33
IDEFICS-9B           1.89      0.64        1.58       0.75         2.75        1.83      1.83      2.50         2.17      1.67
IDEFICS-80B          2.05      0.61        2.33       1.25         2.00        2.50      1.50      3.33         2.33      1.17
InstructBLIP-7B      2.10      0.58        3.42       2.08         1.33        1.92      2.17      3.67         1.17      1.08
InstructBLIP-13B     2.14      0.58        2.75       1.75         1.25        2.08      2.50      4.08         1.50      1.17
LLaVA-7B             1.55      0.76        1.33       0.00         1.83        1.17      2.00      2.58         1.67      1.83
LLaVA-SFT+-7B        1.76      0.67        2.75       2.08         1.42        1.83      2.17      2.17         1.17      0.50
LLaVA-RLHF-7B        2.05      0.68        2.92       1.83         2.42        1.92      2.25      2.25         1.75      1.08
LLaVA-13BX336        1.11      0.84        0.67       0.00         1.75        1.58      1.50      1.25         1.50      0.67
LLaVA-SFT+-13BX336   2.43      0.55        3.08       1.75         2.00        3.25      2.25      3.83         1.50      1.75
LLaVA-RLHF-13B       2.53      0.57        3.33       2.67         1.75        2.25      2.33      3.25         2.25      2.42 | 2309.14525#48 | 2309.14525#50 | 2309.14525 | [
"2302.13971"
] |