id: string (length 12–15)
title: string (length 8–162)
content: string (length 1–17.6k)
prechunk_id: string (length 0–15)
postchunk_id: string (length 0–15)
arxiv_id: string (length 10)
references: list (length 1)
2308.06391#29
Dynamic Planning with a LLM
Interact with a household to solve a task. Only reply with > followed by the action to take or 'think'. Do not apologize. Follow the format of the two examples below.

Table 5: System Prompt used by gpt-3.5-turbo in our reproduction of ReAct.

task: put some peppershaker on drawer. Generated: (:goal (exists (?t - peppershaker ?r - drawer) (inReceptacle ?t ?r))) VALID ✓
task: put a clean mug in coffeemachine. Generated: (:goal (exists (?t - mug ?r - coffeemachine) (and (inReceptacle ?t ?r) (isClean ?t)))) VALID ✓
task: put two cd in safe. Generated: (:goal (exists (?t1 - cd ?t2 - cd ?r - safe) (and (inReceptacle ?t1 ?r) (inReceptacle ?t2 ?r) (not (= ?t1 ?t2))))) VALID ✓
task: heat some mug and put it in coffeemachine. Generated: (:goal (exists (?m - mug ?c - coffeemachine) (and (isReceptacle ?m) (isHot ?m) (inReceptacle ?m ?c)))) INVALID ✗

Table 6: Sample of generated PDDL goals from LLM-DP. The generation gets confused by the semantics of
2308.06391#28
2308.06391#30
2308.06391
[ "2303.11366" ]
2308.06391#30
Dynamic Planning with a LLM
"receptacle" and identifies a mug as a receptacle. While it is true that a mug is a receptacle in the everyday sense, in our defined logic receptacles are fixed, immovable objects which can contain other objects; therefore, a mug is not a Receptacle, and the planning subsequently fails.
2308.06391#29
2308.06391
[ "2303.11366" ]
2308.06394#0
Detecting and Preventing Hallucinations in Large Vision Language Models
# Detecting and Preventing Hallucinations in Large Vision Language Models

# Anisha Gunjal*, Jihan Yin*, Erhan Bas† Scale AI {anisha.gunjal,jihan.yin,erhan.bas}@scale.com

# Abstract

Instruction tuned Large Vision Language Models (LVLMs) have significantly advanced in generalizing across a diverse set of multi-modal tasks, especially for Visual Question Answering (VQA). However, generating detailed responses that are visually grounded is still a challenging task for these models. We find that even the current state-of-the-art LVLM (InstructBLIP) still contains a staggering 30 percent of hallucinatory text in the form of non-existent objects, unfaithful descriptions, and inaccurate relationships. To address this, we introduce M-HalDetect, a Multimodal Hallucination Detection Dataset that can be used to train and benchmark models for hallucination detection and prevention. M-HalDetect consists of 16k fine-grained annotations on VQA examples, making it the first comprehensive multi-modal hallucination detection dataset for detailed image descriptions. Unlike previous work that only considers object hallucination, we additionally annotate both entity descriptions and relationships that are unfaithful. To demonstrate the potential of this dataset for hallucination prevention, we optimize InstructBLIP through our novel Fine-grained Direct Preference Optimization (FDPO). We also train fine-grained multi-modal reward models from InstructBLIP and evaluate their effectiveness with best-of-n rejection sampling. We perform human evaluation on both FDPO and rejection sampling, and find that they reduce hallucination rates in InstructBLIP by 41% and 55% respectively. We also find that our reward model generalizes to other multi-modal models, reducing hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has strong correlation with human evaluated accuracy scores.
2308.06394#1
2308.06394
[ "2302.04023" ]
2308.06394#1
Detecting and Preventing Hallucinations in Large Vision Language Models
Introduction

Large language models (LLMs) have transformed the AI landscape in recent years, scaling their training data to trillions of tokens and their parameter count to hundreds of billions (Brown et al. 2020; OpenAI 2023; Touvron et al. 2023). This has unlocked powerful emergent behaviors, and seen widespread adoption through the use of chat agents such as ChatGPT. Recently, advances in multi-modal models have seen adoption around grafting visual backbones onto pre-trained large language models, resulting in LVLMs (Liu et al. 2023b; Dai et al. 2023; Ye et al. 2023). While this has led to
2308.06394#0
2308.06394#2
2308.06394
[ "2302.04023" ]
2308.06394#2
Detecting and Preventing Hallucinations in Large Vision Language Models
strides in overall VQA performance, it brings along the same challenges that plague these LLMs - a significant one being the propensity to generate hallucinations. In language models, hallucinations occur when the model produces inaccurate or misleading factual information that cannot be supported by existing knowledge stores (Ji et al. 2023; Bang et al. 2023). In the context of VQA for LVLMs, hallucinations can manifest as responses containing references or descriptions of the input image that are incorrect (Li et al. 2023). It is essential to address and mitigate these hallucinations to enhance the reliability and accuracy of multi-modal models in real life use cases. However, these multi-modal hallucinations are hard to programmatically detect and often require human supervision, which can be costly.

(* These authors contributed equally. † Work done at Scale AI. 1 Code and dataset will be publicly released.)

To facilitate automatic hallucination detection, we first build a diverse human-labeled dataset using VQA responses from InstructBLIP, as seen in Figure 1. We then train multiple reward models of various densities (sentence-level, sub-sentence level) on this dataset for hallucination detection. An effective way to use these reward models to reduce hallucinations is to use them to generate rewards in a reinforcement learning setup (Ziegler et al. 2019; Stiennon et al. 2020; Nakano et al. 2021), although the resulting final model can only be as effective as the original reward model used (Bai et al. 2022). Therefore, in this paper we focus on measuring the quality of these reward models, exploring classification metrics and using best-of-n rejection sampling as an approximation of the system's
2308.06394#1
2308.06394#3
2308.06394
[ "2302.04023" ]
2308.06394#3
Detecting and Preventing Hallucinations in Large Vision Language Models
performance. Similar to (Rafailov et al. 2023), we also directly optimize InstructBLIP with fine-grained Direct Preference Optimization (FDPO), a novel variation of DPO in which we leverage fine-grained annotation information from individual examples, rather than collecting relative preference signals from pairs of texts. Both methods show significant success in reducing hallucination rates from InstructBLIP, and furthermore, rejection sampling with our reward models reduces hallucination rates in other multi-modal models as well - LLaVA (Liu et al. 2023b) and mPLUG-OWL (Ye et al. 2023).
2308.06394#2
2308.06394#4
2308.06394
[ "2302.04023" ]
2308.06394#4
Detecting and Preventing Hallucinations in Large Vision Language Models
[Figure 1: Example annotation from the M-HalDetect dataset. The sub-sentences of text generated by the multi-modal LM are tagged into the categories Accurate, Inaccurate, and Analysis. The example shows the prompt "<image> Provide an intricate description of the image, capturing its visual elements, including colors, ..." and a generated description of a busy city street annotated at the sub-sentence level.]

Our main contributions are as follows:

1. We create and release M-HalDetect, our hallucination detection dataset focused on fine-grained annotations over complex image descriptions at a sub-sentence level.
2. We show that InstructBLIP can be optimized using Fine-grained DPO (FDPO) over our M-HalDetect to reduce
2308.06394#3
2308.06394#5
2308.06394
[ "2302.04023" ]
2308.06394#5
Detecting and Preventing Hallucinations in Large Vision Language Models
hallucination rates by 41%.
3. We show that our reward models trained on this dataset can reduce hallucination rates by 55% in InstructBLIP with best-of-64 rejection sampling. The reward model generalizes to other LVLMs, reducing hallucination rates in LLaVA and mPLUG-OWL by 15% and 57% respectively with best-of-16 sampling.
4. We show that our reward model is an effective evaluator of hallucination rates, giving scores aligned with human ratings.

Related Work

Large Vision Language Models (LVLMs) have seen significant advancements in tasks such as generating text from images (Li 2023) and multi-modal in-context learning (Alayrac et al. 2022). Recent work has focused on utilizing instruction tuning techniques to enhance the zero-shot performance of instruction-aware LVLMs across different vision-language tasks (Liu et al. 2023b; Dai et al. 2023). These approaches utilize GPT-4 to generate multi-modal instruction tuning datasets (Liu et al. 2023b), where the image context is provided to GPT-4 through symbolic representations of the image such as captions and object bounding boxes. Others combine datasets across various multi-modal tasks (Dai et al. 2023) with hand-crafted instructions, a method that has found success in training traditional LLMs (Wei et al. 2021). This achieves state of the art performance in a variety of multi-modal tasks, such as visual and video question answering, image captioning, and image classification.

Nevertheless, a significant challenge associated with LVLMs has emerged: preventing hallucinations when generating textual output. It is essential to address and mitigate these hallucinations to enhance the reliability and accuracy of LVLMs in production use cases.

Hallucination Analysis in LVLMs

In (Li et al. 2023), the evaluation metric
2308.06394#4
2308.06394#6
2308.06394
[ "2302.04023" ]
2308.06394#6
Detecting and Preventing Hallucinations in Large Vision Language Models
"POPE" is proposed to evaluate hallucinations in LVLMs by polling questions about generated text. They observed that the current state-of-the-art LVLM (InstructBLIP) has the lowest object hallucination rates among recent LVLMs. Another relevant contribution by Liu et al. (Liu et al. 2023a) is the introduction of the LRV dataset. This dataset contains positive and negative instructions specifically designed to enhance the robustness of LVLMs against hallucination and inconsistent text generation. Furthermore, they proposed a method called GAVIE, which leverages GPT-4 to assist in evaluating preferred answer generations.

These studies collectively contribute to the understanding and mitigation of hallucination-related challenges in LVLMs, by providing evaluation metrics, datasets, and evaluation methods that enhance the reliability and consistency of text generation in multi-modal models. Our work extends the scope of the previous works by not only considering hallucinations on the presence of objects, but also on descriptions of objects such as relative positioning or attributes. We also consider hallucinations on complex object reasoning.

Aligning to Human Preferences

Despite having strong zero-shot performance on classical language benchmark datasets, pre-trained LLMs still struggle to produce detailed generations on par with those written by real humans. Supervised fine-tuning on demonstration data written by humans is not enough, and recent works have focused on using Reinforcement Learning with Human Feedback (RLHF) to address this problem (Stiennon et al. 2020; Touvron et al. 2023; Ouyang et al. 2022; OpenAI 2023).

RLHF typically uses Proximal Policy Optimization (Schulman et al. 2017) to optimize a policy model with rewards from a reward model. This reward model is typically trained on preference pairs of same-prompt generations, often sourced from the base policy model. The preference is usually given by humans, though attempts have been made to use more traditional metrics such as BLEU (Papineni et al. 2002) and ROUGE (Ganesan 2018) as proxies. Using human preferences is more effective in aligning LLMs to human preferences (Stiennon et al. 2020), though sees mixed results in hallucination prevention.
2308.06394#5
2308.06394#7
2308.06394
[ "2302.04023" ]
2308.06394#7
Detecting and Preventing Hallucinations in Large Vision Language Models
Ouyang et al. (Ouyang et al. 2022) found that RLHF helps smaller (6B) language models reduce their hallucination rate, while having the opposite effect on larger models (175B). In this paper, we will focus on relatively smaller multi-modal models (7B) that can be more accessible to end users.

DPO has emerged recently as a viable alternative to RLHF for preference alignment, optimizing the policy model directly without needing to train a reward model and sample rewards through reinforcement learning (Rafailov et al. 2023). It has shown comparable performance with RLHF in summarization and chatbot use cases on language models, and maintains strong performance in higher temperature sampling. At the same time, it avoids the unstable and brittle process of training models with RL (Engstrom et al. 2020).

Fine-grained Preferences

A limitation of both RLHF and DPO is their lack of fine-grained interpretability regarding what makes one generation more preferred than the other. Recent research has made significant progress in leveraging fine-grained user preferences to improve the performance and interpretability of reward models. For example, Wu et al. (Wu et al. 2023) utilize fine-grained human feedback to train multiple reward models at different density levels. These reward models covered passage-level preferences as in the traditional RLHF setting, but also sentence-level and sub-sentence-level preferences in the form of error identification. (Lightman et al. 2023) employs process supervision, providing human feedback on individual steps for more robust rewards.

To extend this fine-grained feedback mechanism into the multi-modal domain, we introduce a new dataset for multi-modal hallucination detection. Our dataset comprises 4,000 images with 4 detailed descriptions each, for a total of 16,000 image-description pairs, annotated at the sub-sentence level to indicate the accuracy of the generated descriptions. Similarly to (Wu et al. 2023), we train sub-sentence and sentence level reward models on this dataset. We also modify the DPO loss to utilize fine-grained annotations.
2308.06394#6
2308.06394#8
2308.06394
[ "2302.04023" ]
2308.06394#8
Detecting and Preventing Hallucinations in Large Vision Language Models
M-HalDetect: Multi-Modal Hallucination Detection Dataset

Dataset Description

In this section, we introduce the M-HalDetect dataset, which incorporates fine-grained annotations for identifying hallucinations in detailed image descriptions generated by LVLMs. The dataset comprises image-description pairs sampled from 4,000 images taken from the val2014 split of the Common Objects in Context (COCO) dataset (Lin et al. 2014). The dataset is divided into a training set with 3,200 images and a development set with 800 images. We choose to utilize the validation set of COCO to avoid potential training data regurgitation from LVLMs trained on the COCO training set. This is roughly 10% of the original COCO validation set, leaving enough data untouched to not impact further validation too heavily.
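As a rough illustration of the split described above, the sketch below samples 4,000 image ids and divides them 3,200/800. The helper name, the seed, and the placeholder id format are our assumptions, not details from the paper.

```python
import random

def build_mhal_image_split(val2014_image_ids, n_images=4000, n_train=3200, seed=0):
    """Illustrative split of COCO val2014 image ids into the train/dev pools
    described above (3,200 / 800). The exact sampling procedure and seed used
    for M-HalDetect are not specified, so treat this as a sketch only."""
    rng = random.Random(seed)
    sampled = rng.sample(val2014_image_ids, n_images)
    return sampled[:n_train], sampled[n_train:]

# Example with placeholder ids (COCO val2014 contains 40,504 images):
train_ids, dev_ids = build_mhal_image_split([f"COCO_val2014_{i:012d}" for i in range(40504)])
print(len(train_ids), len(dev_ids))  # 3200 800
```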
2308.06394#7
2308.06394#9
2308.06394
[ "2302.04023" ]
2308.06394#9
Detecting and Preventing Hallucinations in Large Vision Language Models
To generate responses, we prompt InstructBLIP (Dai et al. 2023) with each image and a randomly selected question from a pool of instructions for describing an image. We initially reuse instructions from the ones used in InstructBLIP's detailed image description training data, which were sourced from the LLaVA-150k (Liu et al. 2023b) dataset. During initial analysis, we observed that doing so led to less diverse responses, potentially due to the influence of this dataset during training. To address this, we added our own prompts to improve generation diversity. An exhaustive list of question prompts is listed in the Appendix. We sample four responses from InstructBLIP using nucleus sampling with a temperature value set to 1.0. This creates 16k image-prompt-response triplets, split between 12,800 samples in the train split and 3,200 samples in the val split.

Dataset Categories

The annotation process involves categorizing different segments of each response into three categories: (i) Accurate, (ii) Inaccurate, and (iii) Analysis. We also include an Unsure category for ambiguous cases. We define the classes as follows:

- Accurate: Objects exist in the image, their descriptions are accurate according to the image, and any described relationships can be accurately inferred from the image.
- Inaccurate: Objects do not exist in the image or their descriptions are inaccurate. Furthermore, if the analysis about the image is not plausible, it is also marked as Inaccurate.
2308.06394#8
2308.06394#10
2308.06394
[ "2302.04023" ]
2308.06394#10
Detecting and Preventing Hallucinations in Large Vision Language Models
- Analysis: Scene or object analysis including complex reasoning or interpretations about the image. These are portions of the data that are more subjective and not grounded visually within the image.
- Unsure: This category is reserved as a last resort if annotators cannot make a judgment about the sentence segment into one of the above three categories.

We provide fine-grained annotations for these 3 categories on the detailed descriptions of images generated by the LVLM. The annotations are provided at the sub-sentence level - i.e., one sentence can comprise multiple segments from different classes, as seen in Figure 1. To make the annotation process user-friendly, we allow the annotators some leeway to miss a few words if there are too many segments in a sentence to be annotated. The unmarked words in a sentence are by default considered "Accurate". In our analysis, we noticed that annotators sometimes skip annotating punctuation, connector words, or introductory sub-sentences such as "The image features" (illustrated in Figure 1).

Dataset Collection

To collect the annotations, we employed Scale AI's RAPID (sca 2023) labeling tool and involved 10 randomly selected human annotators. These annotators had to qualify by passing a training course with a minimum accuracy of 85% on the example tasks to be selected for the final tagging task. The annotators are presented with an image and four responses about the image generated by InstructBLIP. Their task is to annotate segments of each sentence into one of the categories. An example annotation task is illustrated in Figure 1. Further details on dataset generation, diverse prompts, and examples can be found in the Appendix.

# Method

Multi-Modal Reward Model

We implement a multi-modal reward model for detecting the presence of hallucinations generated by LVLMs. Specifically, we reuse the InstructBLIP weights and architecture, swapping the final embedding layer with a classification head. We do this as initializing the reward model from the generative model weights improves training robustness and reward
2308.06394#9
2308.06394#11
2308.06394
[ "2302.04023" ]
2308.06394#11
Detecting and Preventing Hallucinations in Large Vision Language Models
generalization in later RL (Zheng et al. 2023). InstructBLIP consists of an image encoder that extracts image features and a linear mapping layer that projects these features. These image features are passed to an instruction-aware attention layer, the QFormer, that attends instructions over the projected image features. The QFormer outputs are passed to a frozen pretrained decoder as soft prompts, prefixed to the instruction. For this paper, we choose to use Vicuna (vic 2023) as the frozen decoder, following the original InstructBLIP.

[Figure 2: Label density histogram for the Inaccurate class. The x-axis represents the percentage of a sentence that is annotated as Inaccurate and the y-axis represents the frequency of such sentences in the dataset.]

We train reward models at sentence-level and sub-sentence-level densities. For each image-text pair, we run one forward pass similar to (Lightman et al. 2023), and set target class labels at the token concluding each segment, masking out all other indices in the segment. We optimize with cross entropy loss. We fine-tune the entire decoder and reward model head, while freezing the rest of the model. Ablations on model freezing and further hyperparameters, as well as details on training, can be found in the Appendix.

# Sentence-level Reward Prediction

We condense the labeled sub-sentence segments in M-HalDetect into sentence-level segments for a more structured reward format - this makes it more straightforward to run rejection sampling and train with RL, without worrying about localizing proper segments. We identify these sentences using the Natural Language Toolkit (Bird, Klein, and Loper 2009). For each sentence, if there is any segment that is inaccurate, we label the entire sentence as inaccurate. While this may introduce some noise when converting partially inaccurate sentences, we see in Figure 2 that the frequency of such sentences is low. Furthermore, if a sentence has a segment with the "unsure" category, we merge that sentence into the inaccurate class.
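The target construction and the sentence-level collapse described above can be sketched as follows. The segment representation, class ids, and the ternary collapse rule for analysis-only sentences are simplifying assumptions for illustration, not the authors' exact implementation.

```python
# Each annotated segment contributes a class label at its final token position;
# every other position is masked out (ignore_index) for the cross-entropy loss.
IGNORE_INDEX = -100
CLASS_TO_ID = {"ACCURATE": 0, "INACCURATE": 1, "ANALYSIS": 2}

def build_reward_targets(num_tokens, segments):
    """segments: list of (start_token, end_token_exclusive, class_name)."""
    targets = [IGNORE_INDEX] * num_tokens
    for start, end, cls in segments:
        targets[end - 1] = CLASS_TO_ID[cls]   # label only the segment's last token
    return targets

def collapse_to_sentence_label(segment_classes):
    """Sentence-level variant: any inaccurate (or unsure) segment makes the whole
    sentence inaccurate. The accurate/analysis tie-break is one plausible rule."""
    if any(c in ("INACCURATE", "UNSURE") for c in segment_classes):
        return "INACCURATE"
    return "ANALYSIS" if all(c == "ANALYSIS" for c in segment_classes) else "ACCURATE"

print(build_reward_targets(10, [(0, 4, "ACCURATE"), (4, 10, "INACCURATE")]))
print(collapse_to_sentence_label(["ACCURATE", "INACCURATE"]))  # INACCURATE
```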
2308.06394#10
2308.06394#12
2308.06394
[ "2302.04023" ]
2308.06394#12
Detecting and Preventing Hallucinations in Large Vision Language Models
We experiment with two levels of label granularity with this dataset:

- Binary Classification: Condense the Analysis and Accurate classes into the Accurate class. In this setting we have two classes: Accurate and Inaccurate.
- Ternary Classification: In this setting, we have three classes: Accurate, Inaccurate and Analysis.

The dataset distribution is visualized in the Appendix.

[Figure 3: Confusion matrix comparison between Binary and Ternary Classifiers (true label vs. predicted label over Accurate, Analysis, Inaccurate). The right plot represents the binary classifier labels derived from the ternary classifier by merging the Accurate and Analysis classes.]

Segment-level Reward Prediction

We also train a finer-grained reward model that makes hallucination judgments on segments of sentences as opposed to entire sentences. This can provide a less noisy signal when training on annotations, especially with longer compound sentences and hallucinations isolated to small portions of a sentence. We train on this data in a similar fashion to the sentence level rewards, by labeling the end token index of each span or segment of annotated text with its corresponding label. We then mask out every other index in the sequence.

As a baseline, we assume perfect localization of the annotation segments as an upper bound for the performance of this method. Future work can consider training a segment localization model in parallel with the reward model, to detect when hallucinations start and end. Since we do not do this, we cannot use this reward model for rejection sampling, and evaluate purely on classification metrics over the test set. Similar to the sentence-level reward prediction baselines, we also experiment with the binary and ternary variants of the segment-level reward prediction models.

Rejection Sampling

We use the trained reward models to perform rejection sampling on the generations of InstructBLIP to promote selection of less hallucinatory responses. We do this at the passage level, computing reward scores for the whole generation at once. We calculate the reward score by averaging the non-hallucination log probabilities of each sentence. This represents the normalized log probability of the entire passage containing no hallucinations.
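A minimal sketch of the passage-level score just described, assuming we already have each sentence's predicted probability of containing no hallucination; per the Table 1 caption, the reported RM score is the negation of this average, so a perfect passage scores 0.

```python
import math

def passage_reward(per_sentence_p_not_hallucinated):
    """Average per-sentence log-probability of 'no hallucination' for a passage.
    Inputs are sentence-level reward model probabilities; higher (closer to 0)
    means fewer predicted hallucinations."""
    logps = [math.log(max(p, 1e-12)) for p in per_sentence_p_not_hallucinated]
    return sum(logps) / len(logps)

print(passage_reward([0.99, 0.95, 0.60]))  # ~ -0.19
```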
2308.06394#11
2308.06394#13
2308.06394
[ "2302.04023" ]
2308.06394#13
Detecting and Preventing Hallucinations in Large Vision Language Models
We perform rejection sampling in a best-of-n and worst-of-n setting, for n = 16, 64, to study the ability of the reward model in selecting the best generations from InstructBLIP, and the variance in quality between generations.

As we train two types of sentence level reward models (binary and ternary, the latter including the analysis class), we experiment with using both models for reward scoring. We found in our initial experiments that although the binary reward model is able to penalize hallucinations with low scores, it tends to give very high scores towards the analysis class. We theorize that it is much easier to detect non-hallucinogenic analysis
2308.06394#12
2308.06394#14
2308.06394
[ "2302.04023" ]
2308.06394#14
Detecting and Preventing Hallucinations in Large Vision Language Models
[Figure 4 (content): Reward model fine-grained predictions for the prompt "<image> Describe the following image."

Most Preferred Response (rating 1.041):
- ACCURATE 0.995: The image depicts a person riding a green motorcycle on a race track.
- ACCURATE 0.57: The motorcycle is leaning sharply as the rider takes a turn, making for an exciting and thrilling scene.
- ACCURATE 0.6123: The rider is wearing a helmet and gloves, ensuring their safety during the high-speed maneuver.
- INACCURATE 0.0266: In the background, there are several other motorcycles visible, some of which are closer to the viewer, while others are further away.
- INACCURATE 0.2382: A grassy area can also be seen in the background, providing a natural setting for the motorcycle race.
- ANALYSIS 0.8822: Overall, the image captures the excitement and adrenaline of a competitive motorcycle race.

Least Preferred Response (rating 2.771):
- ACCURATE 0.9946: The image features a person riding a green motorcycle on a race track.
- INACCURATE 0.4622: The motorcycle is leaning to the left, and the person is wearing a helmet, gloves, and a backpack.
- ACCURATE 0.517: The motorcycle is positioned towards the right side of the image, and the person appears to be in the middle of a turn.
- INACCURATE 0.0143: There are two other motorcycles visible in the scene, one closer to the left side and the other closer to the right side of the image.
- INACCURATE 0.00735: These additional motorcycles add to the excitement of the race.
- INACCURATE 0.00241: In addition to the motorcycles, there are several chairs scattered throughout the scene, possibly belonging to spectators or crew members.]
2308.06394#13
2308.06394#15
2308.06394
[ "2302.04023" ]
2308.06394#15
Detecting and Preventing Hallucinations in Large Vision Language Models
Figure 4: Rejection sampling examples using the ternary reward model. Scores for sampled responses are computed as the average negative log probability per sentence of a hallucination.

over factual descriptions, and as a result the binary reward model scores are biased towards generations that contain more subjective analysis rather than objective descriptions. This is less of a problem with the ternary reward model, as analysis has been split into its own class. As we will discuss in the results, the ternary model's functionality is a superset of the binary model. For these reasons, we choose to use the ternary reward model for rejection sampling moving forward.

To study the robustness of our reward model and our dataset, we conduct rejection sampling on generations from other LVLMs, namely LLaVA and mPLUG-OWL. For these experiments, we reuse the reward model initialized from InstructBLIP.
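A small sketch of best-of-n / worst-of-n selection over sampled generations, assuming a `score_fn` that wraps the passage-level reward above; the candidate names and scores here are placeholders.

```python
def rejection_sample(candidates, score_fn, n=16, best=True):
    """Best-of-n (or worst-of-n) selection using the passage-level reward score.
    `candidates` is a list of generated responses and `score_fn` maps a response
    to its reward score (higher = fewer predicted hallucinations)."""
    pool = candidates[:n]
    return max(pool, key=score_fn) if best else min(pool, key=score_fn)

# Hypothetical usage with pre-computed scores:
scored = {"resp_a": -0.15, "resp_b": -0.90, "resp_c": -0.05}
print(rejection_sample(list(scored), scored.get, n=3))               # resp_c
print(rejection_sample(list(scored), scored.get, n=3, best=False))   # resp_b
```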
2308.06394#14
2308.06394#16
2308.06394
[ "2302.04023" ]
2308.06394#16
Detecting and Preventing Hallucinations in Large Vision Language Models
$$\mathcal{L}_{\mathrm{FDPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,y,c)\sim\mathcal{D}}\big[\log \sigma(\beta k)\big], \qquad k = \begin{cases} r & c = 1 \\ -r & c = 0 \\ \text{ignored} & c > 1 \end{cases}, \qquad r = \log\frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}$$

with sample segments x, y, c being drawn from the dataset. Here, x is the entire input up until the start of the current segment, y is the generated segment, and c is the class of the current segment, with c = 1 being the preferred class, c = 0 being the dispreferred class, and all other classes being ignored. Since segments are non-overlapping, we can run a single forward pass for each sample to calculate the loss of all segments within the sample all at once.
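The FDPO objective above can be sketched per segment as follows; the dictionary layout and the example log-probabilities are illustrative assumptions rather than the authors' code.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fdpo_loss(segments, beta=0.5):
    """Sketch of the FDPO objective. Each segment carries the policy and
    reference log-probs of the generated span given its prefix, plus its class:
    1 = preferred (accurate), 0 = dispreferred (inaccurate), others ignored."""
    losses = []
    for seg in segments:
        c = seg["cls"]
        if c not in (0, 1):
            continue  # neutral classes contribute no gradient
        r = seg["logp_policy"] - seg["logp_ref"]   # log pi_theta / pi_ref
        k = r if c == 1 else -r                    # push preferred up, dispreferred down
        losses.append(-math.log(sigmoid(beta * k)))
    return sum(losses) / max(len(losses), 1)

example = [
    {"logp_policy": -12.0, "logp_ref": -13.0, "cls": 1},  # accurate span
    {"logp_policy": -9.0,  "logp_ref": -8.0,  "cls": 0},  # inaccurate span
    {"logp_policy": -5.0,  "logp_ref": -5.0,  "cls": 2},  # analysis span, ignored
]
print(fdpo_loss(example))
```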
2308.06394#15
2308.06394#17
2308.06394
[ "2302.04023" ]
2308.06394#17
Detecting and Preventing Hallucinations in Large Vision Language Models
Fine-grained Direct Preference Optimization

While we train a reward model to show the potential of optimizing against hallucinations with RL, we also directly optimize InstructBLIP using FDPO to reduce hallucinations. Since M-HalDetect does not contain the traditional preference pairs used in DPO and RLHF, we explicitly segment each generation into sequences of preferred, dispreferred, and neutral chunks. We then reuse the DPO loss, increasing the likelihoods of preferred chunks while decreasing the likelihood of dispreferred chunks, each regularized by the original likelihood from the base model for the corresponding chunk, while neutral chunks are ignored. Similar to (Wu et al. 2023), this should give a stronger signal during training in reducing hallucinatory generations as compared to using pairs of likelihoods over entire generations.

Recall the loss used in DPO, with $\pi_{\mathrm{ref}}$ as the reference model, $\pi_\theta$ as the policy model, $x$ being the input, $y_w$ being the preferred generation, and $y_l$ being the dispreferred generation. This formulation allows us to categorize each class into positive, negative, or neutral signal, the latter of which will be ignored during training. We run ablations on including the analysis class as either a negative or neutral class when optimizing InstructBLIP with FDPO. We fine-tune only the QFormer and language head, keeping the rest of the model frozen. We use $\beta = 0.5$ for all our FDPO experiments, and train for a maximum of 5 epochs with lr = $10^{-6}$, a warmup ratio of 0.03, and a cosine scheduler. Ablations on model freezing can be found in the Appendix.
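For concreteness, here is one plausible warmup-plus-cosine schedule matching the stated warmup ratio and base learning rate; the paper does not specify the exact scheduler implementation, so this is only illustrative.

```python
import math

def warmup_cosine_lr(step, total_steps, base_lr=1e-6, warmup_ratio=0.03):
    """Linear warmup for ~3% of steps, then cosine decay to zero."""
    warmup_steps = max(int(total_steps * warmup_ratio), 1)
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(total_steps - warmup_steps, 1)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

total = 1000
for s in (0, 30, 500, 999):
    print(s, f"{warmup_cosine_lr(s, total):.2e}")
```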
2308.06394#16
2308.06394#18
2308.06394
[ "2302.04023" ]
2308.06394#18
Detecting and Preventing Hallucinations in Large Vision Language Models
Evaluation

Recent works in multi-modal LLMs (Liu et al. 2023b,a) sometimes use GPT-4 as a human proxy to qualitatively evaluate LM outputs. Specifically, GPT-4 is prompted to give a preference score to an LM generation, either as a stand-alone or compared against GPT-4's own generation. This metric enables automatic evaluation without depending on human evaluators.
2308.06394#17
2308.06394#19
2308.06394
[ "2302.04023" ]
2308.06394#19
Detecting and Preventing Hallucinations in Large Vision Language Models
$$\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x, y_w, y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log\frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log\frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]$$

Since we don't have preferences over pairs of generations, but spans of fine-grained preferences throughout each generation, our FDPO loss is modeled as given above in the FDPO objective.

However, GPT-4-based evaluation is plagued with systematic bias, such as sensitivity to the ordering of responses (Wang et al. 2023). Furthermore, GPT-4's
2308.06394#18
2308.06394#20
2308.06394
[ "2302.04023" ]
2308.06394#20
Detecting and Preventing Hallucinations in Large Vision Language Models
public API does not yet support image inputs. Recent multi-modal works instead pass image context in the form of captions and object bounding boxes. In several cases, this symbolic input cannot represent the image robustly and leads to incorrect evaluations. We performed a qualitative analysis of GPT-4's performance on LLaVA-150k's detail subset and noted that GPT-4 gave frequent inaccurate scores and explanations, failing to detect hallucinations while incorrectly penalizing correct generations. For this reason, we do not use GPT-4 for automatic evaluation of generation quality.

Table 1: Results of reward model and human evaluation scores. The RM score is the average negative log probability of the passage not containing hallucinations, while the human evaluation score is the percentage of content that was truthful. A perfect RM score would be 0, and a perfect human evaluation score would be 1.

Model | Type | Method | RM Score | Human Eval Score
InstructBLIP | Baseline | Baseline (T=0) | 0.97 | 0.71
InstructBLIP | DPO | IA Finetune Qformer (T=0) | 0.48 | 0.83
InstructBLIP | DPO | IA Finetune Qformer (T=1) | 0.72 | 0.75
InstructBLIP | DPO | DA Finetune Qformer (T=0) | 0.85 | 0.70
InstructBLIP | DPO | DA Finetune Qformer (T=1) | 1.03 | 0.58
InstructBLIP | RS | Best of 64 | 0.26 | 0.87
InstructBLIP | RS | Worst of 64 | 1.76 | 0.53
InstructBLIP | RS | Best of 16 | 0.36 | 0.82
LLaVA | Baseline | Baseline (T=0) | 0.383 | 0.805
LLaVA | RS | Best of 16 | 0.159 | 0.834
mPLUG-OWL | Baseline | Baseline (T=0) | 1.26 | 0.476
mPLUG-OWL | RS | Best of 16 | 0.595 | 0.707

To combat these limitations, we use human evaluation to evaluate the hallucination rates of our rejection sampling and DPO generations. Following the same labeling instructions as for M-HalDetect, we annotate the generations into accurate, inaccurate, and analysis spans. For generations from our DPO model, we use temperature=1 and nucleus sampling.
2308.06394#19
2308.06394#21
2308.06394
[ "2302.04023" ]
2308.06394#21
Detecting and Preventing Hallucinations in Large Vision Language Models
We apply this across 50 different images sourced from COCO's validation set, separate from the ones used in M-HalDetect, though we reuse instructions from the dataset.

A common trade-off of reducing hallucinations is a reduction in helpfulness. Consider, for example, a model that outputs nothing - it does not hallucinate, yet it is not helpful either. To avoid this potential bias in our evaluation, we choose to measure the hallucination rate as the number of inaccurate words divided by the number of total words, excluding analysis segments, to calculate what percentage of descriptive objective content contained hallucinations.

We evaluate the multi-modal reward models (sentence-level and segment-level) using the development split of the M-HalDetect dataset. We report Accuracy and F-1 Score for each of the training strategies. All models are initialized with pre-trained InstructBLIP weights, and the results are reported in Table 2. Although the binary version has higher accuracy and F1 than the ternary in both sentence and segment level applications, we see in Figure 3 that the ternary reward model actually performs about the same as the binary reward model if we reduce from a ternary to a binary setting. The ternary model additionally learns to separate the Accurate and Analysis classes, and we use it for rejection sampling and reward scoring experiments moving forward.

Human Evaluation

Figure 4 illustrates an example of rejection sampling using fine-grained feedback from the reward model. The reward model is able to accurately flag hallucinatory sentences which incorrectly claim the presence of other motorcycles and chairs. Furthermore, it is also able to flag sentences that generate analysis about non-existent objects.
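A sketch of the hallucination-rate metric as defined above (inaccurate words over total non-analysis words); the span representation is an assumption for illustration.

```python
def hallucination_rate(spans):
    """Inaccurate words divided by total words, with analysis spans excluded
    from both counts. `spans` is a list of (text, label) pairs with labels in
    {"accurate", "inaccurate", "analysis"}."""
    inaccurate = total = 0
    for text, label in spans:
        if label == "analysis":
            continue
        n_words = len(text.split())
        total += n_words
        if label == "inaccurate":
            inaccurate += n_words
    return inaccurate / total if total else 0.0

spans = [("The image depicts a busy city street", "accurate"),
         ("with several chairs scattered throughout the scene", "inaccurate"),
         ("capturing the hustle and bustle of urban life", "analysis")]
print(round(hallucination_rate(spans), 3))  # 7 / 14 = 0.5
```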
2308.06394#20
2308.06394#22
2308.06394
[ "2302.04023" ]
2308.06394#22
Detecting and Preventing Hallucinations in Large Vision Language Models
# Results

# Reward Model Classification Metrics

Table 2: Baseline Reward Model Results.

Type | Density | Accuracy | F1 Score
Binary | Sentence Level | 79.2 | 78.37
Ternary | Sentence Level | 71.4 | 70.8
Binary | Segment Level | 83.92 | 83.22
Ternary | Segment Level | 77.2 | 76.93

We observe in Table 1 that rejection sampling significantly improves the factual rate of InstructBLIP's outputs. On the other hand, the worst generations of InstructBLIP can be extremely poor, with an almost 50% hallucination rate! We can see from both the human evaluation results and our reward model scores in Figure 6 that we get exponentially diminishing returns as the sample size increases.

[Figure 5: Human evaluation scores against reward scores for all human evaluated results (axes: Reward Model Score vs. content accuracy).]

[Figure 6: Reward model score means and variances as n increases in best-of-n rejection sampling (x-axis: n = number of generations per sample). We see diminishing returns as we increase n.]

Rejection Sampling

We also see that rejection sampling with the InstructBLIP-based reward model manages to reduce hallucination rates for LLaVA and significantly for mPLUG-OWL. This shows that although M-HalDetect's image descriptions are sourced from InstructBLIP, they can still be used successfully in evaluating and improving other LVLMs. It is interesting to see LLaVA's baseline model performing so strongly - we suspect this is because LLaVA is trained specifically for generating
2308.06394#21
2308.06394#23
2308.06394
[ "2302.04023" ]
2308.06394#23
Detecting and Preventing Hallucinations in Large Vision Language Models
detailed descriptions, whereas InstructBLIP and mPLUG-OWL are more general models with a wide range of task applicability.

Additionally, we study the correlation between reward model and human evaluation scores. In Figure 5, we see that across all human evaluated results, there is a clear and strong correlation between our reward model scores and human accuracy scores. Although this is by no means a robust replacement for human annotations, it shows the potential of training models as specific evaluators for hallucinations. Despite the noisiness, such a model could be used for early hyper-parameter selection, being much more cost effective than human evaluation.

Fine-Grained DPO

We evaluate two variations of FDPO across the three classes - one that ignores analysis (IA), and one that disprefers analysis (DA), merging it with the inaccurate class. We see in Table 1 that marking analysis as a negative class does not impact hallucination rates in a significant way when training with FDPO, and may actually worsen rates at higher temperatures. We suspect that this may be because InstructBLIP's
2308.06394#22
2308.06394#24
2308.06394
[ "2302.04023" ]
2308.06394#24
Detecting and Preventing Hallucinations in Large Vision Language Models
generations often have the last sentence being subjective analysis of the image, followed by an end of sequence token. Pushing down the likelihood of generating this sentence increases the likelihood of the generation being lengthened, potentially inducing additional hallucinations as the model runs out of accurate content to describe.

On the other hand, we see that ignoring analysis in FDPO training almost cuts hallucination rates in half. Even sampling at high temperature, generations still on average contain fewer hallucinations than the baseline InstructBLIP model sampled at temperature 0, where it would have the least propensity to hallucinate. This is slightly better than best-of-16 rejection sampling, and almost as good as best-of-64 rejection sampling. This performance gap is to be expected, as rejection sampling can generalize over the entire set of possible model generations, whereas FDPO is more limited in optimizing only over the data that it sees in the training data. There is a trade-off in this performance, though, as best-of-n rejection sampling is slower in inference by a factor of n.

Conclusion

We introduce M-HalDetect, a novel multi-modal fine-grained hallucination detection dataset for benchmarking and training LVLMs to produce more truthful generations. We train fine-grained multi-modal reward models to perform rejection sampling against InstructBLIP. We introduce FDPO to optimize InstructBLIP directly on M-HalDetect, avoiding the need for preference pairs. Both methods significantly reduce InstructBLIP's hallucination rate, extending their effectiveness to the multi-modal domain, and demonstrating the usefulness of M-HalDetect in catching and reducing hallucinations. We show this dataset is generalizable across multiple LVLMs, successfully reducing the hallucination rates of LLaVA and mPLUG-OWL.

While we show strong performance with rejection sampling, it is prohibitively slow for inference in real-world use cases. The next step would be to optimize a generative model, perhaps InstructBLIP, using reinforcement learning with our trained reward models to create a higher quality LVLM for instruction-aware VQA.

A limitation of modern applications of training large models with fine-grained feedback is that training typically takes place over multiple iterations of model training and feedback collection. This ensures the final model is more robustly aligned with the high-level training objective. In this paper, we only perform one cycle of collecting response feedback and training.
2308.06394#23
2308.06394#25
2308.06394
[ "2302.04023" ]
2308.06394#25
Detecting and Preventing Hallucinations in Large Vision Language Models
Indeed, when analyzing some of the responses, we can see hints of overfitting to our training objective - image descriptions are slightly more generic than before, and the preciseness of descriptions may have gone down. Future work can extend our dataset and methods to also account for descriptiveness and informativeness, training multiple reward models for optimizing a more robust final model.

References

2023. Scale AI Rapid Portal. https://scale.com/docs/how-rapid-works. Accessed: 2023-07-23. 2023. Vicuna. https://github.com/lm-sys/FastChat. Accessed: 2023-07-23. Alayrac, J.-B.; Donahue, J.; Luc, P.; Miech, A.; Barr, I.; Hasson, Y.; Lenc, K.; Mensch, A.; Millican, K.; Reynolds, M.; et al. 2022.
2308.06394#24
2308.06394#26
2308.06394
[ "2302.04023" ]
2308.06394#26
Detecting and Preventing Hallucinations in Large Vision Language Models
Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35: 23716–23736. Bai, Y.; Jones, A.; Ndousse, K.; Askell, A.; Chen, A.; DasSarma, N.; Drain, D.; Fort, S.; Ganguli, D.; Henighan, T.; et al. 2022. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862. Bang, Y.; Cahyawijaya, S.; Lee, N.; Dai, W.; Su, D.; Wilie, B.; Lovenia, H.; Ji, Z.; Yu, T.; Chung, W.; Do, Q. V.; Xu, Y.; and Fung, P. 2023.
2308.06394#25
2308.06394#27
2308.06394
[ "2302.04023" ]
2308.06394#27
Detecting and Preventing Hallucinations in Large Vision Language Models
A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity. arXiv:2302.04023. Bird, S.; Klein, E.; and Loper, E. 2009. Natural language processing with Python: analyzing text with the natural language toolkit. O'Reilly Media, Inc. Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; Agarwal, S.; Herbert-Voss, A.; Krueger, G.; Henighan, T.; Child, R.; Ramesh, A.; Ziegler, D.; Wu, J.; Winter, C.; Hesse, C.; Chen, M.; Sigler, E.; Litwin, M.; Gray, S.; Chess, B.; Clark, J.; Berner, C.; McCandlish, S.; Radford, A.; Sutskever, I.; and Amodei, D. 2020.
2308.06394#26
2308.06394#28
2308.06394
[ "2302.04023" ]
2308.06394#28
Detecting and Preventing Hallucinations in Large Vision Language Models
Language Models are Few-Shot Learners. Advances in Neural Information Processing Systems, 33: 1877–1901. Dai, W.; Li, J.; Li, D.; Tiong, A. M. H.; Zhao, J.; Wang, W.; Li, B.; Fung, P.; and Hoi, S. 2023. InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning. arXiv:2305.06500. Engstrom, L.; Ilyas, A.; Santurkar, S.; Tsipras, D.; Janoos, F.; Rudolph, L.; and Madry, A. 2020. Implementation Matters in Deep Policy Gradients: A Case Study on PPO and TRPO. CoRR, abs/2005.12729. Ganesan, K. 2018. ROUGE 2.0: Updated and Improved Measures for Evaluation of Summarization Tasks. arXiv:1803.01937. Ji, Z.; Lee, N.; Frieske, R.; Yu, T.; Su, D.; Xu, Y.; Ishii, E.; Bang, Y. J.; Madotto, A.; and Fung, P. 2023.
2308.06394#27
2308.06394#29
2308.06394
[ "2302.04023" ]
2308.06394#29
Detecting and Preventing Hallucinations in Large Vision Language Models
Survey of Hallucination in Natural Language Generation. ACM Computing Surveys, 55(12): 1–38. Li, C. 2023. Large Multimodal Models: Notes on CVPR 2023 Tutorial. arXiv preprint arXiv:2306.14895. Li, Y.; Du, Y.; Zhou, K.; Wang, J.; Zhao, W. X.; and Wen, J.-R. 2023. Evaluating object hallucination in large vision-language models. arXiv preprint arXiv:2305.10355. Lightman, H.; Kosaraju, V.; Burda, Y.; Edwards, H.; Baker, B.; Lee, T.; Leike, J.; Schulman, J.; Sutskever, I.; and Cobbe, K. 2023.
2308.06394#28
2308.06394#30
2308.06394
[ "2302.04023" ]
2308.06394#30
Detecting and Preventing Hallucinations in Large Vision Language Models
Let's Verify Step by Step. arXiv preprint arXiv:2305.20050. Lin, T.; Maire, M.; Belongie, S. J.; Bourdev, L. D.; Girshick, R. B.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft COCO: Common Objects in Context. CoRR, abs/1405.0312. Liu, F.; Lin, K.; Li, L.; Wang, J.; Yacoob, Y.; and Wang, L. 2023a. Aligning Large Multi-Modal Model with Robust Instruction Tuning. arXiv preprint arXiv:2306.14565. Liu, H.; Li, C.; Wu, Q.; and Lee, Y. J. 2023b. Visual instruction tuning. arXiv preprint arXiv:2304.08485. Nakano, R.; Hilton, J.; Balaji, S.; Wu, J.; Ouyang, L.; Kim, C.; Hesse, C.; Jain, S.; Kosaraju, V.; Saunders, W.; Jiang, X.; Cobbe, K.; Eloundou, T.; Krueger, G.; Button, K.; Knight, M.; Chess, B.; and Schulman, J. 2021. WebGPT: Browser-assisted question-answering with human feedback.
2308.06394#29
2308.06394#31
2308.06394
[ "2302.04023" ]
2308.06394#31
Detecting and Preventing Hallucinations in Large Vision Language Models
CoRR, abs/2112.09332. OpenAI. 2023. GPT-4 Technical Report. Ouyang, L.; Wu, J.; Jiang, X.; Almeida, D.; Wainwright, C.; Mishkin, P.; Zhang, C.; Agarwal, S.; Slama, K.; Ray, A.; et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35: 27730–
2308.06394#30
2308.06394#32
2308.06394
[ "2302.04023" ]
2308.06394#32
Detecting and Preventing Hallucinations in Large Vision Language Models
27744. Papineni, K.; Roukos, S.; Ward, T.; and Zhu, W.-J. 2002. Bleu: a Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, 311–318. Association for Computational Linguistics. Rafailov, R.; Sharma, A.; Mitchell, E.; Ermon, S.; Manning, C. D.; and Finn, C. 2023.
2308.06394#31
2308.06394#33
2308.06394
[ "2302.04023" ]
2308.06394#33
Detecting and Preventing Hallucinations in Large Vision Language Models
Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290. Schulman, J.; Wolski, F.; Dhariwal, P.; Radford, A.; and Klimov, O. 2017. Proximal Policy Optimization Algorithms. CoRR, abs/1707.06347. Stiennon, N.; Ouyang, L.; Wu, J.; Ziegler, D. M.; Lowe, R.; Voss, C.; Radford, A.; Amodei, D.; and Christiano, P. F. 2020.
2308.06394#32
2308.06394#34
2308.06394
[ "2302.04023" ]
2308.06394#34
Detecting and Preventing Hallucinations in Large Vision Language Models
Learning to summarize from human feedback. CoRR, abs/2009.01325. Touvron, H.; Martin, L.; Stone, K.; Albert, P.; Almahairi, A.; Babaei, Y.; Bashlykov, N.; Batra, S.; Bhargava, P.; Bhosale, S.; Bikel, D.; Blecher, L.; Ferrer, C. C.; Chen, M.; Cucurull, G.; Esiobu, D.; Fernandes, J.; Fu, J.; Fu, W.; Fuller, B.; Gao, C.; Goswami, V.; Goyal, N.; Hartshorn, A.; Hosseini, S.; Hou, R.; Inan, H.; Kardas, M.; Kerkez, V.; Khabsa, M.; Kloumann, I.; Korenev, A.; Koura, P. S.; Lachaux, M.-A.; Lavril, T.; Lee, J.; Liskovich, D.; Lu, Y.; Mao, Y.; Martinet, X.; Mihaylov, T.; Mishra, P.; Molybog, I.; Nie, Y.; Poulton, A.; Reizenstein, J.; Rungta, R.; Saladi, K.; Schelten, A.; Silva, R.; Smith, E. M.; Subramanian, R.; Tan, X. E.; Tang, B.; Taylor, R.; Williams, A.; Kuan, J. X.; Xu, P.; Yan, Z.; Zarov, I.; Zhang, Y.; Fan, A.; Kambadur, M.; Narang, S.; Rodriguez, A.; Stojnic, R.; Edunov, S.; and Scialom, T. 2023.
2308.06394#33
2308.06394#35
2308.06394
[ "2302.04023" ]
2308.06394#35
Detecting and Preventing Hallucinations in Large Vision Language Models
Llama 2: Open Foundation and Fine-Tuned Chat Models. Wang, P.; Li, L.; Chen, L.; Zhu, D.; Lin, B.; Cao, Y.; Liu, Q.; Liu, T.; and Sui, Z. 2023. Large language models are not fair evaluators. arXiv preprint arXiv:2305.17926. Wei, J.; Bosma, M.; Zhao, V. Y.; Guu, K.; Yu, A. W.; Lester, B.; Du, N.; Dai, A. M.; and Le, Q. V. 2021.
2308.06394#34
2308.06394#36
2308.06394
[ "2302.04023" ]
2308.06394#36
Detecting and Preventing Hallucinations in Large Vision Language Models
Fine-tuned Language Models Are Zero-Shot Learners. CoRR, abs/2109.01652. Wu, Z.; Hu, Y.; Shi, W.; Dziri, N.; Suhr, A.; Ammanabrolu, P.; Smith, N. A.; Ostendorf, M.; and Hajishirzi, H. 2023. Fine-Grained Human Feedback Gives Better Rewards for Language Model Training. arXiv preprint arXiv:2306.01693. Ye, Q.; Xu, H.; Xu, G.; Ye, J.; Yan, M.; Zhou, Y.; Wang, J.; Hu, A.; Shi, P.; Shi, Y.; Li, C.; Xu, Y.; Chen, H.; Tian, J.; Qi, Q.; Zhang, J.; and Huang, F. 2023. mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality. Zheng, R.; Dou, S.; Gao, S.; Hua, Y.; Shen, W.; Wang, B.; Liu, Y.; Jin, S.; Liu, Q.; Zhou, Y.; Xiong, L.; Chen, L.; Xi, Z.; Xu, N.; Lai, W.; Zhu, M.; Chang, C.; Yin, Z.; Weng, R.; Cheng, W.; Huang, H.; Sun, T.; Yan, H.; Gui, T.; Zhang, Q.; Qiu, X.; and Huang, X. 2023. Secrets of RLHF in Large Language Models Part I: PPO. arXiv:2307.04964. Ziegler, D. M.; Stiennon, N.; Wu, J.; Brown, T. B.; Radford, A.; Amodei, D.; Christiano, P. F.; and Irving, G. 2019. Fine-Tuning Language Models from Human Preferences. CoRR, abs/1909.08593.

# Data Annotation

Annotation Portal

We use Scale AI's RAPID Annotation Portal (sca 2023). The annotators are provided with an image, a question, and an LM-generated detailed description of the image. For each sentence, the annotators mark parts of the sentence into the appropriate categories: Accurate, Inaccurate, Analysis, Unsure.
2308.06394#35
2308.06394#37
2308.06394
[ "2302.04023" ]
2308.06394#37
Detecting and Preventing Hallucinations in Large Vision Language Models
This is illustrated in Figure 8.

Annotation Examples

We present some examples from the M-HalDetect dataset in Figure 7.

Class-wise density distribution

For each sentence in the dataset (train split), we compute densities in the form of the number of words in each sentence annotated into each of the three classes. This is illustrated in a histogram in Figure 10, where the x-axis represents the class presence within the sentence and the y-axis represents the number of sentences.

We see that of the three classes, the Accurate class's densities are the least polar, while the Inaccurate class's densities are the most polar, with some slight bias towards lower densities. This indicates that sentences with inaccuracies are either fully inaccurate, or contain just a few words that are inaccurate. This matches up with the Accurate class's slight bias towards higher densities, implying that most mixed-label sentences with inaccuracies tend to comprise inaccurate and accurate material, not analysis.

As there is a high concentration of sentences that are fully categorized into one of the classes, we consider using a sentence level representation of annotations as one of the reward model baselines.
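The per-sentence density computation can be sketched as below, assuming per-word labels are available for each sentence; the label names follow the dataset's categories.

```python
from collections import Counter

def sentence_class_densities(word_labels):
    """Fraction of a sentence's words annotated with each class.
    `word_labels` is the list of per-word labels for one sentence."""
    counts = Counter(word_labels)
    n = len(word_labels)
    return {cls: counts.get(cls, 0) / n for cls in ("ACCURATE", "INACCURATE", "ANALYSIS")}

print(sentence_class_densities(["ACCURATE"] * 6 + ["INACCURATE"] * 2))
# {'ACCURATE': 0.75, 'INACCURATE': 0.25, 'ANALYSIS': 0.0}
```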
2308.06394#36
2308.06394#38
2308.06394
[ "2302.04023" ]
2308.06394#38
Detecting and Preventing Hallucinations in Large Vision Language Models
More details on the generation are deferred to Section .

Researcher Agreement

Figure 13 illustrates the class-level analysis of researcher agreement concerning the annotation task. Differing from human agreement, this assessment was conducted by two authors of the paper who possess expertise in the field of Natural Language Processing (NLP) and a comprehensive understanding of the practical use of the trained reward models derived from this dataset. The study involves a comparison of independent annotations provided by two researchers for a consistent set of 10 images. Due to the fine-grained nature of the annotation, there are some disagreements or subjectivity in annotating individual words, especially between the accurate and inaccurate classes.

We performed qualitative analysis on the disagreements between the researchers or annotators and found a pattern in labelling differences rooted mostly in the class pairs (i) Accurate and Analysis, and (ii) Accurate and Inaccurate. The different interpretations of the image are attributed mainly to the subjectivity of this task or ambiguity in the descriptions. In addition, disparities in annotation can emerge when a single attribute of a phrase is incorrect. In such instances, some annotators might opt to flag the specific attribute as erroneous, while others could decide to label the entire phrase as incorrect. Owing to these challenges in the dataset and the subjectivity of this task, we can expect a reward model trained on this dataset to have a ceiling classification performance around the same range.
2308.06394#37
2308.06394#39
2308.06394
[ "2302.04023" ]
2308.06394#39
Detecting and Preventing Hallucinations in Large Vision Language Models
# Training Details

Model Freezing Ablations

Reward Model: We explore freezing different parts of the reward model during training, and report results in Table 3.

- Finetune Decoder: The entire LLM decoder and the reward model head are finetuned.
- FT-Decoder 3 layers: The last 3 layers of the LLM decoder and the reward model head are finetuned, while keeping everything else frozen.
- FT-Decoder 1 layer: The final layer of the LLM decoder and the reward model head are finetuned, while keeping everything else frozen.
2308.06394#38
2308.06394#40
2308.06394
[ "2302.04023" ]
2308.06394#40
Detecting and Preventing Hallucinations in Large Vision Language Models
- Finetune Qformer: The InstructBLIP QFormer is finetuned along with the reward model head, while the decoder is kept frozen.

We initially explored only fine-tuning the reward head while keeping the entire model frozen, but found a significant drop in performance compared to all other methods of around 20% in both accuracy and F1, so we do not include it in our main results. This can be considered the performance baseline of the reward model. We see that for Binary Classification, the fine-tuned decoder outperforms or is at par with all the other baselines. However, the performance gap between the fully fine-tuned decoder and partially fine-tuned models is not very significant. A similar trend is seen for Ternary Classification, but we observe a significant drop in performance for the finetuned QFormer. We theorize that this may be caused by fine-tuning a randomly initialized classification head at the end of the model at the same time as the QFormer near the start of the model. Improvements can be made here by fine-tuning the classification head first, before fine-tuning the QFormer, but we leave that to future work due to resource constraints.

DPO: While always fine-tuning the final unembedding layer of the decoder, we also study the effects of fine-tuning just the QFormer and fine-tuning just the top 3 layers of the decoder. We also explored fine-tuning the entire decoder in FDPO, but early experiments showed a heavy propensity towards overfitting and unstable training, so we chose not to pursue that option.

Training Hyperparameters: We train all models for 10 epochs with a batch size of 16 per device and a learning rate of 2e-5. The training is done with fsdp full shard auto wrap mode.
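A generic sketch of this kind of selective freezing in PyTorch; the module names ("decoder", "reward_head") and the toy model are hypothetical stand-ins, not InstructBLIP's actual parameter names.

```python
import torch.nn as nn

def freeze_for_ablation(model: nn.Module, trainable_prefixes):
    """Freeze every parameter, then re-enable only those whose names match the
    chosen prefixes (e.g. the reward head plus the last decoder layers)."""
    for p in model.parameters():
        p.requires_grad = False
    for name, p in model.named_parameters():
        if any(name.startswith(prefix) for prefix in trainable_prefixes):
            p.requires_grad = True
    return [n for n, p in model.named_parameters() if p.requires_grad]

# Toy example: a stand-in "decoder" with 4 layers and a reward head.
toy = nn.ModuleDict({
    "decoder": nn.ModuleList([nn.Linear(8, 8) for _ in range(4)]),
    "reward_head": nn.Linear(8, 3),
})
print(freeze_for_ablation(toy, ["decoder.3", "reward_head"]))
```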
2308.06394#39
2308.06394#41
2308.06394
[ "2302.04023" ]
2308.06394#41
Detecting and Preventing Hallucinations in Large Vision Language Models
Binary Classification Training Logs: In this experiment, the classifier predicts a sentence into one of two classes: Accurate and Inaccurate. Training logs are shown in Figures 11 and 12. All models are trained for 10

The image depicts a man riding a skateboard down a paved path in a park. He is wearing a black jacket and appears to be enjoying himself as he glides along the path. There are several trees visible in the background, providing a serene setting for the skateboarder's leisurely ride. In addition to the skateboarder, there are several other people scattered throughout the scene. Some are walking on the path, while others are sitting or standing near the trees. Overall, the image captures a peaceful moment in the park where people are enjoying the outdoors and engaging in various activities.
Figure 7 example response (second image): The image features a large white van parked inside a garage or workshop. The van is covered in newspapers and appears to be undergoing some sort of refurbishment or repainting process. There are several newspaper clippings on the van's surface, giving it a unique appearance. Inside the garage, there is a person working on the van, possibly involved in the refurbishment or repainting process. Additionally, there is another person standing near the van, observing the work being done.

Figure 7: Example Annotations

Figure 8 content (annotation interface): label options Accurate / Inaccurate / Analysis / Unsure. Image: COCO_val2014_000000516856.jpg. Question: <image> Compose a comprehensive description of the image, capturing its visual elements, including colors, shapes, textures, objects, and any individuals. In addition, depict the activities or interactions happening within the scene and provide an overall analysis of the composition, atmosphere, and narrative conveyed. Response 1: The image features a young man sitting at a dining table, eating a sandwich. He is wearing a gray shirt and appears to be enjoying his meal. The table is cluttered with various objects, including plates, bowls, cups, and bottles. There are also candles on the table, adding a warm and cozy atmosphere to the scene.
Figure 8: Scale AI RAPID Portal used for annotation.

Type     Method               Sentence-level Prediction        Segment-level Prediction
                              Accuracy    F1 Score             Accuracy    F1 Score
Binary   Finetune Decoder     79.2        78.37                83.92       83.22
Binary   FT-Decoder 3 layers  78.5        76.91                83.73       82.61
Binary   FT-Decoder 1 layer   78.41       77.4                 83.08       81.65
Binary   Finetune Qformer     79.22       78.22                83.41       81.61
Ternary  Finetune Decoder     71.4        70.8                 77.2        76.93
Ternary  FT-Decoder 3 layers  70.41       69.64                76.29       75.37
Ternary  FT-Decoder 1 layer   70.25       70.02                76.08       75.46
Ternary  Finetune Qformer     68.8        62.7                 74.7        73.97

Table 3: Baseline Reward Model Results: We compare the development set performance of sentence-level and segment-level reward prediction models. Comparison is done with accuracy and F1 score across binary and ternary label granularities.

[Figure 9 panels: Binary Class Distribution, Ternary Class Distribution, and Four Class Distribution bar charts over the labels Accurate, Inaccurate, Analysis, and Unsure.]

Figure 9: Class-wise Label Distribution

Datasize   Full Dataset   Half Dataset   Quarter Dataset
Accuracy   0.7489         0.7474         0.7375
F1 Score   0.7414         0.7387         0.7144
Table 4: Dataset Scaling: Increasing the dataset size for reward model training gives a performance boost as size increases from a quarter to half but saturates thereafter.
Ternary Classification Training Logs The training curves and evaluation metrics for the ternary reward model classifiers are shown in Figures 14 and 15. In this experiment, the classifier predicts each sentence into one of three classes: Accurate, Inaccurate, and Analysis.

Data Scaling Analysis To study the effect of data scaling on the performance of our reward model, we ablate the amount of training data used and compare the resulting validation accuracy and F1. We perform this analysis on the reward model that fine-tunes the last 3 layers of the InstructBLIP LM decoder.
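As a concrete illustration of this ablation, the sketch below subsamples the training set to quarter, half, and full sizes and trains the same reward model configuration on each split before scoring the validation set. The `train_reward_model` and `predict` helpers, the random subsampling, and the F1 averaging scheme are illustrative assumptions rather than the authors' code.

```python
import random
from sklearn.metrics import accuracy_score, f1_score

def subsample(examples, fraction, seed=0):
    """Return a random fraction of the training examples (quarter, half, or full)."""
    rng = random.Random(seed)
    return rng.sample(examples, int(len(examples) * fraction))

def run_scaling_ablation(train_examples, val_examples, train_reward_model, predict):
    """train_reward_model / predict are hypothetical wrappers around the
    FT-Decoder-3-layers reward model described above."""
    results = {}
    for name, fraction in [("Quarter Dataset", 0.25), ("Half Dataset", 0.5), ("Full Dataset", 1.0)]:
        model = train_reward_model(subsample(train_examples, fraction))
        preds, labels = predict(model, val_examples)
        results[name] = {
            "accuracy": accuracy_score(labels, preds),
            "f1": f1_score(labels, preds, average="weighted"),  # averaging is an assumption
        }
    return results
```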
Table 4 shows that as the dataset size for reward model training is increased from a quarter to a half, the model's F1 score improves by over 2 percentage points. Beyond the half dataset size, however, further increments in data do not lead to substantial performance improvements, and performance begins to saturate.

Question prompts for dataset generation Figure 16 lists the description-generation question prompts that we use for generating data. We generate data with two sets of questions. The first set is derived from the data generation method used in the LLaVA dataset (Liu et al. 2023b). The second set is a custom list of questions drafted by the authors. Question prompts are passed to InstructBLIP (Dai et al. 2023) to sample responses.

[Figure 10 panels: histograms for the Accurate, Inaccurate, and Analysis classes over the range 0.0-1.0.]

Figure 10: Class-wise Label Density histogram: We show the fine-grained dataset label distribution by computing the percentage of a sentence annotated into each category. The x-axis represents the percentage of a sentence that is annotated into a class (Accurate/Inaccurate/Analysis) and the y-axis represents the frequency of such sentences in the dataset.
Figure 11: Binary Classification: Sentence-level model Training and Evaluation loss, Evaluation F-1 Score and Accuracy.
Figure 12: Binary Classification: Segment-level model Training and Evaluation loss, Evaluation F-1 Score and Accuracy.

[Figure 13 panels: Confusion Matrix for Researcher-Annotator Agreement and Confusion Matrix for Inter-Researcher Agreement, with true and predicted labels over Accurate, Inaccurate, Analysis, and Unsure.]

Figure 13: Confusion Matrix for class-wise researcher agreement scores for the M-HalDetect dataset's annotation task.
Figure 14: Ternary Classification: Sentence-level model Training and Evaluation loss, Evaluation F-1 Score and Accuracy.
Figure 15: Ternary Classification: Segment-level model Training and Evaluation loss, Evaluation F-1 Score and Accuracy.

LLaVA based question prompts

• "Can you describe the main features of this image for me? <image>"
• "Explain the visual content of the image in great detail. <image>"
• "<image> Describe the following image."
• "Analyze the image in a comprehensive and detailed manner. <image>"
• "<image> What are the key elements in this picture?"
• "What do you see happening in this image? <image>"
• "<image> What's happening in the scene?"
• "What are the key elements in this picture? <image>"
• "Write a detailed description of the given image. <image>"
• "<image> Explain the visual content of the image in great detail."
• "Can you elaborate on the elements of the picture provided? <image>"
• "What is this photo about? <image>"
• "Describe the following image. <image>"
• "<image> Analyze the image in a comprehensive and detailed manner."
• "<image> What do you see happening in this image?"
• "<image> Write a detailed description of the given image."
• "<image> Can you elaborate on the elements of the picture provided?"
• "<image> What do you think is going on in this snapshot?"
• "<image> Can you describe the main features of this image for me?"
• "What do you think is going on in this snapshot? <image>"
• "<image> What is this photo about?"
• "What's happening in the scene? <image>"

Custom question prompts

• "<image> Please provide a detailed description of the image. Describe the visual elements, colors, shapes, textures, and any objects or people present along with the overall mood or atmosphere portrayed in the image."
• "<image> Please provide a detailed description of the image, including its visual elements, such as colors, shapes, textures, objects, and people."
• "<image> Provide an intricate description of the image, capturing its visual elements, including colors, shapes, textures, objects, and any people present."
• "<image> Delve into the details of the image and compose a comprehensive description, incorporating its visual aspects like colors, shapes, textures, objects, and individuals."
• "<image> Craft an elaborate depiction of the image, highlighting its visual components such as colors, shapes, textures, objects, and the presence of any individuals."
• "<image> Compose a detailed account of the image, encompassing its visual characteristics, like colors, shapes, textures, objects, and any human subjects, by paying careful attention to the specifics."
• "<image> Compose a comprehensive description of the image, capturing its visual elements, including colors, shapes, textures, objects, and any individuals. In addition, depict the activities or interactions happening within the scene and provide an overall analysis of the composition, atmosphere, and narrative conveyed."
• "Please provide a detailed description of the image. Describe the visual elements, colors, shapes, textures, and any objects or people present along with the overall mood or atmosphere portrayed in the image. <image>"
• "Please provide a detailed description of the image, including its visual elements, such as colors, shapes, textures, objects, and people. <image>"
• "Provide an intricate description of the image, capturing its visual elements, including colors, shapes, textures, objects, and any people present. <image>"
• "Delve into the details of the image and compose a comprehensive description, incorporating its visual aspects like colors, shapes, textures, objects, and individuals. <image>"
• "Craft an elaborate depiction of the image, highlighting its visual components such as colors, shapes, textures, objects, and the presence of any individuals. <image>"
• "Compose a detailed account of the image, encompassing its visual characteristics, like colors, shapes, textures, objects, and any human subjects, by paying careful attention to the specifics. <image>"
• "Compose a comprehensive description of the image, capturing its visual elements, including colors, shapes, textures, objects, and any individuals. In addition, depict the activities or interactions happening within the scene and provide an overall analysis of the composition, atmosphere, and narrative conveyed. <image>"
• "Give a detailed description of the image. <image>"
• "Write a detailed description of the given image. <image>"

Figure 16: List of prompts used for detail generation responses from InstructBLIP.
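As a rough illustration of how these prompts can be used to sample detailed descriptions, the sketch below queries the publicly released InstructBLIP checkpoint through HuggingFace Transformers. The checkpoint name, the sampling parameters, and the local image path are assumptions for illustration and may differ from the authors' exact data generation setup; the `<image>` placeholder is stripped because the processor handles image conditioning separately.

```python
import random
from PIL import Image
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration

ckpt = "Salesforce/instructblip-vicuna-7b"  # assumed checkpoint
processor = InstructBlipProcessor.from_pretrained(ckpt)
model = InstructBlipForConditionalGeneration.from_pretrained(ckpt, device_map="auto")

prompts = [
    "Describe the following image.",
    "Write a detailed description of the given image.",
    # ... the remaining prompts from Figure 16, with the <image> placeholder removed.
]

def sample_description(image: Image.Image) -> str:
    """Sample one detailed description for an image using a randomly chosen prompt."""
    prompt = random.choice(prompts)
    inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(
        **inputs,
        do_sample=True,       # sampling hyperparameters are illustrative assumptions
        temperature=0.7,
        top_p=0.9,
        max_new_tokens=256,
    )
    return processor.batch_decode(output_ids, skip_special_tokens=True)[0].strip()

# Example image name taken from the annotation portal figure (assumed to be available locally).
image = Image.open("COCO_val2014_000000516856.jpg")
print(sample_description(image))
```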
# A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment

Yingxiu Zhao1, Bowen Yu2*, Binyuan Hui2, Haiyang Yu2, Fei Huang2, Yongbin Li2*, Nevin L. Zhang1
1 The Hong Kong University of Science and Technology, 2 Alibaba Group
{yzhaocx,lzhang}@connect.ust.hk, {yubowen.ybw,binyuan.hby,yifei.yhy,f.huang,shuide.lyb}@alibaba-inc.com

# Abstract
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose tree-instruct to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key2.
# Introduction

The latest generation of large language models (LLMs) has attracted significant attention due to their immense potential in language technologies [26, 37, 44, 20]. To enhance interactive user requests and chat interfaces, these models undergo instruction-tuning using supervised input-output pairs [16, 17, 10]. This process enables the model to comprehend the required style and format for effective user interaction, showcasing the knowledge and capabilities gained during pre-training [28]. Consequently, the efficacy of instruction data significantly influences LLMs' abilities, shaping users' perceptions of their capabilities [43, 19, 9]. Recently, LIMA has demonstrated that with just 1000 carefully curated prompts and responses, an LLM can achieve remarkably strong performance [48]. This suggests that the scaling laws of instruction tuning are not solely dependent on data quantity but rather influenced by prompt diversity and quality. However, one critical and less-explored aspect
of evaluating instruction data is complexity. There are at least three unanswered questions related to complexity: (1) Scaling law of complexity: Intuitively, more complex instruction data might elicit more potential capabilities in LLMs to address intricate problems [23, 25]. WizardLM [45] introduces in-depth and in-breadth evolving methods to rewrite prompts into more complex and diverse versions, resulting in a 12.4% increase in LLMs' win rate with the same amount of data. Yet, whether WizardLM's performance improvement is due to complexity or merely derived from diversity remains uncertain. Moreover, the ongoing enhancements in complexity are yet to be explored. (2) Relationship between complexity-induced performance improvement and token quantity: Enhancing instance complexity inevitably increases the number of tokens per instance [11]. While WizardLM exhibits performance improvements with the same instance quantity, it increases the number of tokens per instance. This raises the question of whether complexity-induced improvement in LLMs results from increased training tokens. As known, enlarging LLMs' pretraining token counts can lead to better performance [24, 36].

* Correspondence to: Bowen Yu <[email protected]>, Yongbin Li <[email protected]>.
2 The data and code of this work are available at https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/tree-instruct

Preprint. Under review.

Figure 1: The scaling law of instruction complexity. We experiment with enhancing the complexity of semantic trees for 1,000 Alpaca instructions by adding extra 3, 6, and 10 nodes. We then evaluate models fine-tuned on instruction data of varying complexities against text-davinci003 in terms of win rate on AlpacaEval (Left). Additionally, we examine win rates on different subsets of AlpacaEval (Right). In the left figure, we indicate the average token count for instructions of different complexity levels. We also use WizardLM's in-depth deepening as the baseline.
(3) Effectiveness of complexity-based curriculum instruction learning: Curriculum learning is a strategy in machine learning that starts with easy instances and gradually introduces harder ones [4]. Its effectiveness has been demonstrated in various NLP tasks like machine translation [49], dialogue [50], and question answering [31]. However, its potential efficacy in instruction tuning is under-explored.

However, to answer the aforementioned questions, the key hurdle lies in finding a controlled way to increase the complexity of instruction data without introducing unwanted factors such as diversity. WizardLM [45] employs an in-depth evolving prompt like "Your objective is to rewrite a given prompt into a more complex version to make ChatGPT and GPT4 a bit harder to handle." to complicate the existing instructions. Unfortunately, although intended to enhance complexity, this approach might inadvertently introduce diversity by diverting from the initial instruction objectives. This issue becomes particularly severe when repeatedly employing in-depth evolving to achieve varying levels of complexity. We study and analyze the instructions before and after in-depth evolving in Sec. 4.1. As illustrated in Fig. 2, the iteratively evolved instructions append additional objectives that deviate from the original instructions, showcasing a greater diversity. To address this concern, we propose Tree-Instruct, which involves prompting LLMs to add a specific number of new nodes to the semantic tree of an existing instruction, as opposed to manipulating the text sequence directly, as done in Self-Instruct [41] or WizardLM [45]. We use the number of added nodes to represent the introduced level of complexity. The advantage of this approach lies in the fact that semantic tree nodes lack any sequential order [32]. By enforcing LLMs to operate on the
semantic tree, this process becomes analogous to inserting new words into the middle of the original instructions. This compels the models to complicate while adhering to the structural constraints of the initial instruction rather than merely appending new instructions. It can significantly mitigate the issue of straying from the primary theme of the initial instruction. We leverage GPT-4 to assess the consistency of evolved instructions with original ones, and the results verify that Tree-Instruct improves WizardLM's consistency score from 0.56 to 0.69. Fig. 1 highlights how the number of added nodes raises the complexity level of the samples. With the help of Tree-Instruct, we have obtained the following preliminary experimental conclusions: (1) As the complexity of the instruction data increases, the benefits of instruction tuning continue to grow: Following LIMA, we attempt instruction tuning using 1,000 samples from Alpaca-GPT-4 as a base. We add 3, 6, and 10 nodes to the semantic tree of each sample, resulting in performance gains of 14%, 18%, and 24%, respectively, across eight sub-skills such as commonsense, writing, and coding, showing consistent improvements. Furthermore, this scaling law can be extended to more complex instruction data. For instance, when fine-tuning 6,000 conversations filtered from ShareGPT via OpenChat [38] (showing excellent performance among open-source LLMs), we observe that by increasing the complexity of around 3,000 users' instructions, the winning rate increases from 80.87% to 82% on the AlpacaEval leaderboard3. (2) The increase in complexity partly comes from additional tokens, but a few complex instructions outperform diverse yet simple instructions, under the same token budget: We find that as the complexity increases, the number of tokens also increases. Adding 10 nodes in the tree increases the average token length of samples from 186 to 607. Hence, to make a fair comparison, we increase the number of original instructions from 1,000 to 4,000 to match the total token quantity of our tree-instructed samples. Under this setting, the performance gain from adding 10 nodes still achieves more than 20%.
This indicates that the improvement due to complexity is partly attributed to the increased tokens, but increasing the complexity of samples is equivalent to the diversity achieved by four times the token count of simple samples. Moreover, when considering the same token count, instructions evolved from Tree-Instruct exhibit a 5% higher win rate compared to in-depth deepening of WizardLM, making it a more effective method for increasing complexity. (3) Curriculum instruction tuning may not be effective; increasing complexity is all you need: We try curriculum learning by gradually training samples on harder samples, i.e., first train on data with added three nodes, then six nodes, and finally ten nodes. We observe that, with the same training steps, the curriculum learning approach does outperform training with a mixed difficulty of samples but still falls short compared to directly training with the added ten-nodes samples. This indicates that when we have more complex samples, the significance of simpler samples diminishes significantly, suggesting that repeating training with complex samples may be sufficient. # 2 Related Work Large Language Models (LLMs), trained on extensive textual datasets, have risen as premier solutions for a diverse array of NLP tasks [47]. Despite their remarkable performance, these models are not without their limitations. These limitations encompass potential misunderstandings of human instructions, the propensity to generate biased content, and the sporadic generation of hallucinated information. Consequently, bringing LLMs in line with human expectations has become a central focal point within the research community [3, 34]. To attain this alignment, researchers need to amass high-quality instructional data that authentically mirrors human needs and expectations. A rational starting point for data collection involves the adaptation of existing NLP benchmarks into natural language instructions, like PromptSource [2], SuperNaturalInstruction [42], Unnatural Instructions [15] and FLAN [21] are spearheading this strategy. These benchmarks encompass a wide range of NLP tasks, spanning dialogue, reasoning, and coding, all unified under the realm of language instructions.
TÜLU [40] showcases that instructions from NLP tasks significantly bolster the reasoning prowess of aligned LLMs, where the diversity of tasks plays a pivotal role in shaping the capabilities of LLMs. Nevertheless, a notable trend in NLP datasets is their propensity to emphasize particular skills, consequently yielding instructions that possess a somewhat confined scope. This constraint has the potential to impede their capacity to meet the intricate requirements of real-world applications.

3 https://tatsu-lab.github.io/alpaca_eval/

# Initial instruction: Implementing effective strategies to curb environmental pollutants in the atmosphere.

# Tree-10-nodes instruction: Implement effective strategies to curb environmental pollutants in the atmosphere at different altitudes by reducing emissions from industrial sources like factories and vehicles. Additionally, monitor these emissions using specialized equipment and stringently enforce regulations to ensure industries adhere to best practices and environmental standards.
# WizardLM Deepening Evolve-iteration-3: Investigating and formulating intricate methodologies, deeply anchored in cutting-edge quantum and classical scientific principles, to systematically and holistically reduce, monitor, and assess both primary and secondary atmospheric environmental pollutants. This approach is crucial for ensuring sustainable socio-economic progress while actively safeguarding and nurturing our planet's delicate ecological balance.
Figure 2: The instruction generated by different evolving methods: Tree-instruction after adding ten nodes and WizardLM by iteratively deepening three times. We also demonstrate how Tree-Instruct enhances the complexity of the original instruction's semantic tree by introducing three nodes (orange), six nodes (green), and ten nodes (purple).

In order to tackle these challenges, one possible approach is to formulate instructions via purposeful human annotations. An exemplary precursor to such a corpus is OpenAssistant [19], which comprises over 10k dialogues involving the participation of 13k annotators from around the world. Another remarkable venture into harnessing human-generated instructions through crowd-sourcing is ShareGPT4. This platform encourages users to contribute and exchange their engaging conversations with ChatGPT and GPT4. While human annotation ensures both quality and diversity, it becomes challenging to ensure the quantity and complexity of instructional data due to the highly expensive annotation process [7], and the distribution of difficulty levels in human-created instructions tends to skew towards being easy [23]. To address this issue, Self-Instruct [41] leverages ChatGPT's in-context learning capability to generate a large volume of instructions from a predefined set of human-annotated instructions spanning diverse topics and task types. Building upon this foundation, LIMA [48] and Alpagasus [5] separately validate the significant impact of data diversity and quality on instructional effectiveness. The selection of thousands of high-quality and diverse instructional examples proves more advantageous in achieving better results compared to using the entire dataset. Further increasing the number of instructions could potentially induce a semantic shift in the LLMs [1]. Up to this point, three key metrics within the instructional data (diversity, quality, and quantity) have been elucidated for their impact on tuning, though exploration into complexity remains insufficient. While WizardLM [45] demonstrates that evolving both the complexity and diversity of instructions can lead to performance enhancement, it does not deeply investigate the individual importance of complexity. This paper introduces a method, Tree-Instruct, which enhances instructional complexity while simultaneously constraining thematic consistency to mitigate variations in diversity. Our experiments preliminarily establish a scaling law regarding complexity, show that the improvement resulting from increased complexity is not
solely due to the introduction of more training tokens, and illustrate that LLMs only require complex samples for instruction tuning, rather than simple samples serving as foundational padding for curriculum learning.

# 3 Tree-Instruct

Enhancing the complexity of natural language text seems like a straightforward task for proficient LLMs. For instance, WizardLM utilizes a simple text prompt to complexify instructions as mentioned in Sec. 1. However, due to the extensive pre-training of LLMs on massive corpora, where models predict the next token based on the preceding context, we have noticed that LLMs can often exploit the given instruction by simply continuing the text beyond the initial prompt to artificially amplify complexity. While adding continuation constraints can enhance the complexity of instructions, it simultaneously leads them away from the core thematic focus. This divergence expands the topic and domain, fostering diversity that hinders our ability to solely assess the impact of increased instruction complexity. We leverage GPT-4 to automatically score the consistency (in the range 0 to 1) of the instructions before and after implementing in-depth deepening following WizardLM, and find that it only gets a 0.56 alignment score. Furthermore, upon iteratively enhancing the instruction's complexity, the guidance might become ineffective, losing its original essence. For instance, it might cease to present a question, rendering it arduous for the LLM to generate a suitable response. This phenomenon matches observations made by WizardLM, which prompt them to introduce the Elimination Evolving procedure. To address this issue, we first consider what determines the complexity of natural language text. In linguistics and education, there is a lack of precise scientific consensus on determining the complexity of text. No single source can precisely summarize a text's complexity. Currently, a widely accepted perspective suggests that qualitative measures of text complexity require an informed judgment of text difficulty based on various factors. The standards use factors like purpose, levels of meaning, structure, language conventions, clarity, and knowledge demands to measure text difficulty5. Among these, text structure is a more measurable indicator, as we can convert text sequences into tree structures using mature dependency or semantic tree parsers [33]. Tree structures, prevalent in natural language representations, offer structural insights reflecting human text comprehension [14].
4 https://sharegpt.com/
5 https://www.generationready.com/wp-content/uploads/2021/04/Beginners-Guide-to-Text-Complexity.pdf

Furthermore, we can gauge text complexity accurately by measuring the width and depth of trees, as a deeper and wider grammar tree signifies more intricate sentence structures [8, 39]. Inspired by the concept of tree complexity, we propose Tree-Instruct, wherein LLMs directly add a specific number of nodes to the semantic tree of an instruction. This increases the tree's width and depth, thereby enhancing text structure complexity. In detail, Tree-Instruct encompasses three steps:

Step 1: Tree Construction involves semantic parsing, where a structured representation is created from a natural language sentence. This process yields a tree structure for an instruction. For instance, given the instruction
"Implementing effective strategies to curb environmental pollutants in the atmosphere", we derive an original tree structure Tree-1, as shown in the first tree of Fig. 2.

Step 2: Nodes Expansion operates on the acquired tree structure, expanding it in depth or width by adding new nodes, thus influencing the new tree's complexity. We only add meaningful nodes representing nouns or verbs, since words like adjectives or prepositions contribute little to tree complexity. The second tree in Fig. 2 illustrates Tree-2 after adding ten nodes.

Step 3: Tree Sentenceization aims to make LLMs revert the complex new tree structure (Tree-2) back to a fluent natural language instruction by introducing connecting words.

Prompt for Tree-Instruct

You are an instruction rewriter. You need to rewrite a given user instruction following Procedures step by step. You MUST ONLY return the NEW instruction you rewrite.
Procedure:
step-1: Parse the old "instruction" to a TREE-1 through Semantic Parsing in the natural language processing field.
step-2: EXPAND the above NEW TREE-1 from DEPTH or WIDTH by ADDING "your_added_number" meaningful NEW NODEs as nouns or verbs to form a NEW TREE-2. The new nodes should be constructed with detailed and pertinent information.
step-3: Generate a totally NEW "instruction" based on the expanded NEW TREE-2.

# Old instruction: "your_instruction"
# New instruction:

Additionally, we present all three steps in a single prompt, guiding LLMs to implement our requirements step by step without external semantic parsing tools (see Block 3, where
"your_added_number" indicates the desired number of nodes we aim to add to the tree). In particular, we directly control the complexity by adjusting "your_added_number". Visually, with more nodes added, the tree and the instruction become more complex. This gradual increase results in a tree with 3, 6, or 10 additional nodes, progressively increasing the complexity of instructions, as shown in Fig. 2. We also observe that adding nodes to the semantic tree constructs a framework for the original instruction. This approach prevents significant deviations from the main topic.
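As an illustration of how this prompt could be applied programmatically, the sketch below fills in "your_added_number" and "your_instruction" and queries GPT-4 through the OpenAI Python client. The model name and decoding settings are assumptions for illustration, not the exact configuration used by the authors.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TREE_INSTRUCT_PROMPT = """You are an instruction rewriter. You need to rewrite a given user instruction following Procedures step by step. You MUST ONLY return the NEW instruction you rewrite.
Procedure:
step-1: Parse the old "instruction" to a TREE-1 through Semantic Parsing in the natural language processing field.
step-2: EXPAND the above NEW TREE-1 from DEPTH or WIDTH by ADDING {added_number} meaningful NEW NODEs as nouns or verbs to form a NEW TREE-2. The new nodes should be constructed with detailed and pertinent information.
step-3: Generate a totally NEW "instruction" based on the expanded NEW TREE-2.

Old instruction: "{instruction}"
New instruction:"""

def tree_instruct(instruction: str, added_number: int) -> str:
    """Complexify one instruction by asking the LLM to add `added_number` tree nodes."""
    prompt = TREE_INSTRUCT_PROMPT.format(added_number=added_number, instruction=instruction)
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,  # decoding settings are an assumption
    )
    return response.choices[0].message.content.strip()

print(tree_instruct(
    "Implementing effective strategies to curb environmental pollutants in the atmosphere.", 10))
```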
GPT-4's automatic assessment shows that our prompt modifications maintain thematic consistency with a score of 0.69.

# 4 Experiments

In this experiment, our primary objective is to address four key research questions: (1) Can Tree-Instruct, compared to WizardLM's in-depth evolving, better maintain thematic consistency while augmenting complexity? (2) Does increasing the complexity of instructions through Tree-Instruct result in a greater unleashing of the LLM's latent potential, i.e., will more intricate instructions yield better outcomes? (3) Given the same token constraints, which approach is better suited for instruction tuning: employing complex yet limited instruction data or opting for simpler but more diverse instructions? (4) Can curriculum-based instruction tuning methods (from simpler to more complex instruction data) yield improvements similar to the substantial enhancements observed in many previous NLP tasks? Our primary experiments are conducted on the Alpaca GPT-4 dataset [29], which contains 52,000 instruction-following examples responded to by GPT-4 using prompts from Alpaca [35].
Following LIMA, we randomly select 1,000 instruction samples to form Alpaca-1K, serving as the starting point for our evolutionary process. We query gpt-4 [27] to execute Tree-Instruct, thereby increasing the complexity of each instruction within Alpaca-1K. In order to analyze the scaling law, we introduce three levels of complexity by augmenting the instructions with 3, 6, and 10 additional nodes, respectively. This allows us to observe the impact of these varying complexities on the outcomes. For the modified instructions, we employ gpt-4 once again to generate corresponding responses. To validate our findings, we replicate the results by applying the in-depth evolving with deepening prompt provided by WizardLM to the same Alpaca-1K instructions. To demonstrate the scalability of our discoveries to larger datasets, we also conduct experiments on the expansive OpenChat dataset [38]. We employ the pre-trained LLaMA-13B-v1 [37] model as the initialization, fine-tuning it on instruction-tuning datasets generated through different methods. Each GPU processes batches of size 2 (for OpenChat evolved data, the batch size is set to 14), and the maximum sequence length is set to 2048. For optimization, we adopt the AdamW [22] optimizer with a learning rate of 1e-4 and a weight decay of 0.1, following the practices established by OpenChat. The training is performed across 8 A100 GPUs using DeepSpeed ZeRO-2 for a duration of 3 epochs. During inference, a temperature of 0.7 and a top-p value of 0.9 are employed to evaluate all the methods under comparison.

# 4.1 Tree-Instruct is Better than In-Depth Evolving

We start by investigating whether operating on a tree, as opposed to a sequence, better aligns with the intended objectives of the original instruction.
Recent studies have introduced the LLMs-as-evaluator paradigm, leveraging LLMs to assess candidate samples, which closely approximates human evaluative agreement [6, 13, 18, 46]. Consequently, we employ gpt-4 to gauge which approach exhibits greater consistency with the initial instructions. As depicted in Figure 3, the result indicates that employing Tree-Instruct, which entails adding instructions with 6 additional nodes, achieves a higher degree of alignment with the original instructions in 63% of cases, compared to WizardLM's in-depth deepening that undergoes modifications and generates instructions with similar token quantities to Tree-6-nodes. This observation serves as evidence that the presence of a tree structure constraint enables LLMs to more effectively modify instructions within the framework of the original guidance, rather than diverging and incorporating unrelated content.
[Figure 3: counts of "Tree wins" vs. "WizardLM wins".]

Furthermore, our findings demonstrate that Tree-Instruct is more effective than in-depth evolving in eliciting the capabilities of LLMs. We conduct evaluations on the AlpacaEval evaluation set for both methods. AlpacaEval is a recent authoritative leaderboard comprising 805 diverse samples, each showcasing various abilities. The evaluations are performed with gpt-4 as the evaluator, comparing the win rates of models against text-davinci003. As depicted in Table 2, under similar total token counts, Tree-Instruct exhibits a win rate improvement of 5 points over WizardLM's
in-depth deepening. We attribute this enhancement to Tree-Instruct's adeptness at closely tailoring instructions to the central topic, thereby introducing complexity. In contrast, in-depth evolving might deviate from the original theme and introduce irrelevant content, resulting in instructions of inadequate difficulty. Such instructions could potentially hinder LLMs from generating appropriate responses, rendering them less effective in the generation process.

# 4.2 More Complexity, Better Capability

After demonstrating the effectiveness of Tree-Instruct in enhancing sample complexity, we present a scaling law pertaining to complexity, as depicted in Fig. 1 and Table 2. As the number of nodes gradually increases from Tree-3-Nodes to Tree-6-Nodes and further to Tree-10-Nodes, the model's win rate on AlpacaEval exhibits a remarkable upward trend. This scaling law underscores the significance of complexity within instruction data.
Table 1: Win rate of different methods vs. text-davinci003 on the AlpacaEval leaderboard.

Method                        Win Rate (%)    Token Length
GPT4                          95.28           1365
LLaMA2-Chat-70B               92.66           1790
Claude-2                      91.36           1069
OpenChat-13B-V3.1             89.49           1484
ChatGPT                       89.37           827
WizardLM-13B-V1.2             89.17           1635
OpenChat-13B                  80.87           1632
UltraLM-13B                   80.64           1087
WizardLM-13B                  75.31           985
OpenChat-13B+Tree-Instruct    82.00 (+1.13)   1543
Additionally, we carry out a meticulous evaluation for each skill/category within the Vicuna test sets. These sets are divided into distinct skill sets/categories, allowing for an intricate analysis of the proficiency attained through instruction tuning. Notably, Tree-10-Nodes outperforms Tree-6-Nodes across a majority of categories, encompassing Counterfactual, Roleplay, Knowledge, Generic, and more. Similar trends are evident when comparing Tree-6-Nodes with the original instructions, indicating that augmenting the complexity of instruction data leads to a comprehensive enhancement in the capabilities of the LLM.
[Figure 4 radar chart: text-davinci-003, Tree-10-nodes, Tree-6-nodes, and Alpaca (Tree-0-nodes) compared across the categories counterfactual, coding, common-sense, roleplay, generic, and knowledge.]

Finally, given that our experimentation is based on 1,000 instances, we extend our investigation to validate the effectiveness of Tree-Instruct across a larger dataset using OpenChat. OpenChat is built upon approximately 6K GPT-4 conversations derived from around 90K ShareGPT conversations. It has notably achieved top rankings as an open-source LLM. As we initiate these experiments, OpenChat attains an 80.87% win rate on AlpacaEval. Since OpenChat involves multi-turn conversations, we specifically complexify instructions within single-turn conversations and certain meaningful concluding turns, rather than those containing generic terms like "stop" or "continue." This modification encompasses 3,000 conversations.
Figure 4: Evaluation of models trained on Alpaca-1K added with various nodes vs. text-davinci003 on categories of the Vicuna test set.

As delineated in Table 1, following the complexification of Tree-Instruct, we enhance OpenChat's performance from 80.87% to 82.00%, underscoring the sustained efficacy of our approach across a larger volume of data.

# 4.3 Less but Complex is Better Than More but Simple

While we have demonstrated that increasing the complexity of instruction data can enhance the capabilities of LLMs, a new question arises: is this improvement due to the introduction of more training tokens as complexity increases? Our analysis indicates that the average length of the original Alpaca data, combining both input and output, is 186 tokens. Upon incorporating an additional 10 nodes, this count escalates to 607 tokens, equivalent to a 3.26-fold increase in training data. With this question in mind, we introduce a new baseline: Alpaca-4K, trained with 4,000 samples (additionally sampling 3,000 instances from the original Alpaca data). As shown in Table 2, Alpaca-4K's total token count surpasses that of Tree-10-Nodes by 24%. Despite this, with the same training steps, a significant 22% performance gap in win rate remains. However, compared to Alpaca-1K, there is indeed a 2% improvement. This suggests that introducing more instruction tokens does enhance model performance. Nonetheless, the effectiveness of diverse yet simple instructions still falls short compared to a smaller quantity of more complex directives.
Table 2: Analysis of the complexity scaling laws and the win rate-token count relationship.

Method          Win Rate (%)     Total Token Size
Alpaca-1K       26.40            186,464
Alpaca-4K       28.60            757,042
WizardLM        40.37            556,981
Tree-3-Nodes    40.80 (+14.40)   385,760
Tree-6-Nodes    44.78 (+18.38)   546,731
Tree-10-Nodes   50.19 (+23.79)   608,556

Table 3: Analysis of mixed-difficulty training and curriculum learning. The numbers are win rates on various subsets and the overall AlpacaEval test set, vs. text-davinci003.

Method                    helpful-base  self-instruction  oasst  koala  vicuna  Overall
Mix-training              43.41         25.39             36.70  40.38  35.00   34.78
Hard-to-Easy Curriculum   49.61         22.22             43.62  41.02  46.25   37.69
Easy-to-Hard Curriculum   52.71         26.98             49.47  41.02  50.00   41.37
Tree-3-nodes              50.38         26.58             46.81  42.31  52.50   40.80
Tree-6-nodes              55.81         29.76             52.13  46.15  53.75   44.78
Tree-10-nodes             67.44         31.74             53.19  54.48  65.00   50.19

# 4.4 Curriculum Learning May Not Be Effective for Instruction Tuning

Now, armed with three sets of data featuring increasing difficulty levels and aligned themes, we can delve into an unanswered question in instruction tuning: is it necessary to train an LLM progressively from easy to hard? As depicted in Table 3, we embark on a trial, initially training on Tree-3-Nodes data, followed by Tree-6-Nodes, and finally Tree-10-Nodes. Each segment constitutes one-third of the total training steps.
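To make the compared training schedules concrete, the sketch below builds the three data orderings (easy-to-hard, hard-to-easy, and mixed) from the Tree-3/6/10-node sets; the variable names and the random shuffle used for the mixed baseline are illustrative assumptions.

```python
import random

def build_schedule(tree3, tree6, tree10, mode, seed=0):
    """Return the training-example order for one baseline.
    Each list holds the instruction-response pairs of one difficulty level,
    so each level occupies one third of the total training steps."""
    if mode == "easy_to_hard":
        return tree3 + tree6 + tree10
    if mode == "hard_to_easy":
        return tree10 + tree6 + tree3
    if mode == "mixed":
        data = tree3 + tree6 + tree10
        random.Random(seed).shuffle(data)
        return data
    raise ValueError(f"unknown schedule: {mode}")
```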
We also devise two baselines: one involving the combined training of all three difficulty levels, and another wherein difficult samples are trained prior to the easy ones. Experimental results reveal that, compared to mixed-difficulty training and training difficult samples before easier ones, an easy-to-hard curriculum learning approach does enhance model performance. However, the performance gain from curriculum learning only slightly surpasses that of exclusively training on Tree-3-Nodes, the simplest dataset we construct. This outcome contrasts with previous observations of curriculum learning. We attribute this variance to the fact that modern LLMs possess parameter counts several times larger than those of earlier models like BERT [12] or T5 [30]. With this substantial parameter increase, LLMs are now capable of directly learning from challenging samples, diminishing the need for foundational exposure to simpler samples. The more exposure to challenging samples, the more the model's capabilities are ignited.
# 5 Conclusion

In this study, we have undertaken a preliminary exploration of the intrinsic relationship between instruction complexity and the ability to follow instructions. Our observations include: (1) As the complexity of the instruction data increases, the benefits of instruction tuning continue to amplify. (2) The rise in complexity is partly attributed to additional tokens, yet a few intricate instructions outperform a range of simpler instructions, all within the same token limit. (3) A curriculum-based instruction tuning, progressing from easier to harder, might not yield the desired effectiveness; embracing increased complexity proves essential. We anticipate that this exploration will supplement existing knowledge regarding the aspects of quality, quantity, diversity, and complexity of instruction data. This contribution aims to assist future researchers in constructing superior instruction data.
# References

[1] Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, and Melisa Russak. Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning. arXiv preprint arXiv:2307.03692, 2023. [2] Stephen H Bach, Victor Sanh, Zheng-Xin Yong, Albert Webson, Colin Raffel, Nihal V Nayak, Abheesht Sharma, Taewoon Kim, M Saiful Bari, Thibault Fevry, et al. Promptsource: An integrated development environment and repository for natural language prompts. arXiv preprint arXiv:2202.01279, 2022. [3] Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022. [4] Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41-48, 2009. [5] Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, et al.
Alpagasus: Training a better alpaca with fewer data. arXiv preprint arXiv:2307.08701, 2023. [6] Yi Chen, Rui Wang, Haiyun Jiang, Shuming Shi, and Ruifeng Xu. Exploring the use of large language models for reference-free text quality evaluation: A preliminary empirical study, 2023. [7] Zhihong Chen, Feng Jiang, Junying Chen, Tiannan Wang, Fei Yu, Guiming Chen, Hongbo Zhang, Juhao Liang, Chen Zhang, Zhiyi Zhang, et al. Phoenix: Democratizing chatgpt across languages. arXiv preprint arXiv:2304.10453, 2023. [8] Fanny Chevalier, David Auber, and Alexandru Telea. Structural analysis and visualization of c++ code evolution using syntax trees. In Ninth international workshop on Principles of software evolution: in conjunction with the 6th ESEC/FSE joint meeting, pages 90-97, 2007. [9] Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al.
Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. See https://vicuna.lmsys.org (accessed 14 April 2023), 2023. [10] Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022. [11] Yinpei Dai, Hangyu Li, Yongbin Li, Jian Sun, Fei Huang, Luo Si, and Xiaodan Zhu.
Preview, attend and review: Schema-aware curriculum learning for multi-domain dialog state tracking. arXiv preprint arXiv:2106.00291, 2021. [12] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding, 2019. [13] Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. Gptscore: Evaluate as you desire. arXiv preprint arXiv:2302.04166, 2023. [14] Julia Hancke, Sowmya Vajjala, and Detmar Meurers. Readability classification for german using lexical, syntactic, and morphological features. In Proceedings of COLING 2012, pages 1063-
1080, 2012. [15] Or Honovich, Thomas Scialom, Omer Levy, and Timo Schick. Unnatural instructions: Tuning language models with (almost) no human labor. arXiv preprint arXiv:2212.09689, 2022. [16] Srinivasan Iyer, Xi Victoria Lin, Ramakanth Pasunuru, Todor Mihaylov, Daniel Simig, Ping Yu, Kurt Shuster, Tianlu Wang, Qing Liu, Punit Singh Koura, et al. Opt-iml: Scaling language model instruction meta learning through the lens of generalization. arXiv preprint arXiv:2212.12017, 2022.
[17] Joel Jang, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, and Minjoon Seo. Exploring the benefits of training expert language models over instruction tuning. arXiv preprint arXiv:2302.03202, 2023.
[18] Yunjie Ji, Yan Gong, Yiping Peng, Chao Ni, Peiyan Sun, Dongyu Pan, Baochang Ma, and Xiangang Li. Exploring chatgpt's ability to rank content: A preliminary study on consistency with human preferences, 2023.
[19] Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi-Rui Tam, Keith Stevens, Abdullah Barhoum, Nguyen Minh Duc, Oliver Stanley, Richárd Nagyfi, et al. Openassistant conversations – democratizing large language model alignment. arXiv preprint arXiv:2304.07327, 2023.
[20] Minghao Li, Feifan Song, Bowen Yu, Haiyang Yu, Zhoujun Li, Fei Huang, and Yongbin Li. Api-bank: A benchmark for tool-augmented llms. arXiv preprint arXiv:2304.08244, 2023.
[21] Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V Le, Barret Zoph, Jason Wei, et al. The flan collection: Designing data and methods for effective instruction tuning. arXiv preprint arXiv:2301.13688, 2023.
[22] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
[23] Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. Wizardcoder: Empowering code large language models with evol-instruct. arXiv preprint arXiv:2306.08568, 2023.
[24] Niklas Muennighoff, Alexander M Rush, Boaz Barak, Teven Le Scao, Aleksandra Piktus, Nouamane Tazi, Sampo Pyysalo, Thomas Wolf, and Colin Raffel. Scaling data-constrained language models. arXiv preprint arXiv:2305.16264, 2023.
[25] Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, and Ahmed Awadallah. Orca: Progressive learning from complex explanation traces of gpt-4. arXiv preprint arXiv:2306.02707, 2023.
[26] OpenAI. Introducing chatgpt, 2022.
[27] OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
[28] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
[29] Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. Instruction tuning with gpt-4. arXiv preprint arXiv:2304.03277, 2023.
[30] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer, 2020.
[31] Mrinmaya Sachan and Eric Xing. Easy questions first? A case study on curriculum learning for question answering. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 453–463, 2016.
[32] Vighnesh Shiv and Chris Quirk. Novel positional encodings to enable tree-based transformers. Advances in Neural Information Processing Systems, 32, 2019.
[33] Valery Solovyev, Marina Solnyshkina, Vladimir Ivanov, and Ivan Rygaev. Computing syntactic parameters for automated text complexity assessment. In CEUR Workshop Proceedings, volume 2475, pages 62–71, 2019.
[34] Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, and Houfeng Wang. Preference ranking optimization for human alignment. arXiv preprint arXiv:2306.17492, 2023.
[35] Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. Stanford alpaca: An instruction-following llama model, 2023.
[36] Yi Tay, Jason Wei, Hyung Won Chung, Vinh Q Tran, David R So, Siamak Shakeri, Xavier Garcia, Huaixiu Steven Zheng, Jinfeng Rao, Aakanksha Chowdhery, et al. Transcending scaling laws with 0.1% extra compute. arXiv preprint arXiv:2210.11399, 2022.
[37] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models, 2023.
[38] Guan Wang, Sijie Cheng, Qiying Yu, and Changling Liu. OpenChat: Advancing Open-source Language Models with Imperfect Data, 7 2023.
[39] Xiangli Wang, Yi Zhang, Yusuke Miyao, Takuya Matsuzaki, and Jun'ichi Tsujii. Deep context-free grammar for Chinese with broad-coverage. In Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing, pages 11–19, 2013.
[40] Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A Smith, Iz Beltagy, et al. How far can camels go? Exploring the state of instruction tuning on open resources. arXiv preprint arXiv:2306.04751, 2023.
[41] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language model with self generated instructions. arXiv preprint arXiv:2212.10560, 2022.
[42] Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, et al. Super-naturalinstructions: Generalization via declarative instructions on 1600+ nlp tasks. arXiv preprint arXiv:2204.07705, 2022.
[43] Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, and Qun Liu. Aligning large language models with human: A survey. arXiv preprint arXiv:2307.12966, 2023.
[44] Xiangpeng Wei, Haoran Wei, Huan Lin, Tianhao Li, Pei Zhang, Xingzhang Ren, Mei Li, Yu Wan, Zhiwei Cao, Binbin Xie, et al. Polylm: An open source polyglot large language model. arXiv preprint arXiv:2307.06018, 2023.
[45] Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. Wizardlm: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244, 2023.
[46] Xinghua Zhang, Bowen Yu, Haiyang Yu, Yangyu Lv, Tingwen Liu, Fei Huang, Hongbo Xu, and Yongbin Li. Wider and deeper llm networks are fairer llm evaluators. arXiv preprint arXiv:2308.01862, 2023.
[47] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. A survey of large language models. arXiv preprint arXiv:2303.18223, 2023.
[48] Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al.