Dataset schema (string fields report minimum and maximum character lengths; chunk-id reports its minimum and maximum values):

| Field | Type | Min | Max |
|---|---|---|---|
| doi | string | 10 | 10 |
| chunk-id | int64 | 0 | 936 |
| chunk | string | 401 | 2.02k |
| id | string | 12 | 14 |
| title | string | 8 | 162 |
| summary | string | 228 | 1.92k |
| source | string | 31 | 31 |
| authors | string | 7 | 6.97k |
| categories | string | 5 | 107 |
| comment | string | 4 | 398 |
| journal_ref | string | 8 | 194 |
| primary_category | string | 5 | 17 |
| published | string | 8 | 8 |
| updated | string | 8 | 8 |
| references | list | n/a | n/a |
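For orientation, below is a minimal sketch of loading and inspecting one row of a dataset with this schema using the Hugging Face datasets library. The repository id is a placeholder assumption, not the dataset's actual id.

```python
# Minimal sketch: load one row of this chunked-papers dataset and inspect
# the schema fields above. The repository id is a placeholder assumption.
from datasets import load_dataset

ds = load_dataset("your-org/chunked-arxiv-papers", split="train")  # hypothetical id

row = ds[0]
for field in ["doi", "chunk-id", "id", "title", "primary_category", "published", "updated"]:
    print(field, "->", row[field])

# `chunk` holds one extracted passage; `summary` repeats the paper's abstract
# on every row of that paper; `references` is a list of {"id": ...} arXiv ids.
print(row["chunk"][:200])
print(len(row["references"]), "referenced arXiv ids")
```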
2308.05960
47
Shunyu Yao, Howard Chen, John Yang, and Karthik Narasimhan. WebShop: Towards scalable real-world web interaction with grounded language agents. arXiv preprint. Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, and Silvio Savarese. Retroformer: Retrospective large language agents with policy gradient optimization, 2023b. Jianguo Zhang, Kun Qian, Zhiwei Liu, Shelby Heinecke, Rui Meng, Ye Liu, Zhou Yu, Huan Wang, Silvio Savarese, and Caiming Xiong. DialogStudio: Towards richest and most diverse unified dataset collection for conversational AI, 2023.
2308.05960#47
BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents
The massive successes of large language models (LLMs) encourage the emerging exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to generate actions with its core LLM and interact with environments, which facilitates the ability to resolve complex tasks by conditioning on past interactions such as observations and actions. Since the investigation of LAA is still very recent, limited explorations are available. Therefore, we provide a comprehensive comparison of LAA in terms of both agent architectures and LLM backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA, where a controller manages the communication among multiple agents. We conduct simulations on both decision-making and multi-step reasoning environments, which comprehensively justify the capacity of LAAs. Our performance results provide quantitative suggestions for designing LAA architectures and the optimal choice of LLMs, as well as the compatibility of both. We release our implementation code of LAAs to the public at \url{https://github.com/salesforce/BOLAA}.
http://arxiv.org/pdf/2308.05960
Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.AI
Preprint
null
cs.AI
20230811
20230811
[ { "id": "2204.02311" }, { "id": "2112.09332" }, { "id": "2307.13854" }, { "id": "2304.01904" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "1802.08802" }, { "id": "2305.14992" }, { "id": "2306.06070" }, { "id": "2308.00675" }, { "id": "2302.07867" }, { "id": "2305.18323" }, { "id": "2307.12856" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2303.17651" } ]
2308.05960
48
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena, 2023. Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Yonatan Bisk, Daniel Fried, Uri Alon, et al. WebArena: A realistic web environment for building autonomous agents. arXiv preprint arXiv:2307.13854, 2023. URL https://webarena.dev.
2308.05960#48
BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents
The massive successes of large language models (LLMs) encourage the emerging exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to generate actions with its core LLM and interact with environments, which facilitates the ability to resolve complex tasks by conditioning on past interactions such as observations and actions. Since the investigation of LAA is still very recent, limited explorations are available. Therefore, we provide a comprehensive comparison of LAA in terms of both agent architectures and LLM backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA, where a controller manages the communication among multiple agents. We conduct simulations on both decision-making and multi-step reasoning environments, which comprehensively justify the capacity of LAAs. Our performance results provide quantitative suggestions for designing LAA architectures and the optimal choice of LLMs, as well as the compatibility of both. We release our implementation code of LAAs to the public at \url{https://github.com/salesforce/BOLAA}.
http://arxiv.org/pdf/2308.05960
Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.AI
Preprint
null
cs.AI
20230811
20230811
[ { "id": "2204.02311" }, { "id": "2112.09332" }, { "id": "2307.13854" }, { "id": "2304.01904" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "1802.08802" }, { "id": "2305.14992" }, { "id": "2306.06070" }, { "id": "2308.00675" }, { "id": "2302.07867" }, { "id": "2305.18323" }, { "id": "2307.12856" }, { "id": "2303.17580" }, { "id": "2305.15334" }, { "id": "2307.16789" }, { "id": "2304.03442" }, { "id": "2303.11366" }, { "id": "2303.17651" } ]
2308.06394
49
Wei, J.; Bosma, M.; Zhao, V. Y.; Guu, K.; Yu, A. W.; Lester, B.; Du, N.; Dai, A. M.; and Le, Q. V. 2021. Fine-tuned Language Models Are Zero-Shot Learners. CoRR, abs/2109.01652. Wu, Z.; Hu, Y.; Shi, W.; Dziri, N.; Suhr, A.; Ammanabrolu, P.; Smith, N. A.; Ostendorf, M.; and Hajishirzi, H. 2023. Fine-Grained Human Feedback Gives Better Rewards for Language Model Training. arXiv preprint arXiv:2306.01693. Ye, Q.; Xu, H.; Xu, G.; Ye, J.; Yan, M.; Zhou, Y.; Wang, J.; Hu, A.; Shi, P.; Shi, Y.; Li, C.; Xu, Y.; Chen, H.; Tian, J.; Qi, Q.; Zhang, J.; and Huang, F. 2023. mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality. Zheng, R.; Dou, S.; Gao, S.; Hua, Y.;
2308.06394#49
Detecting and Preventing Hallucinations in Large Vision Language Models
Instruction tuned Large Vision Language Models (LVLMs) have significantly advanced in generalizing across a diverse set of multi-modal tasks, especially for Visual Question Answering (VQA). However, generating detailed responses that are visually grounded is still a challenging task for these models. We find that even the current state-of-the-art LVLMs (InstructBLIP) still contain a staggering 30 percent of the hallucinatory text in the form of non-existent objects, unfaithful descriptions, and inaccurate relationships. To address this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion Dataset that can be used to train and benchmark models for hallucination detection and prevention. M-HalDetect consists of 16k fine-grained annotations on VQA examples, making it the first comprehensive multi-modal hallucination detection dataset for detailed image descriptions. Unlike previous work that only consider object hallucination, we additionally annotate both entity descriptions and relationships that are unfaithful. To demonstrate the potential of this dataset for hallucination prevention, we optimize InstructBLIP through our novel Fine-grained Direct Preference Optimization (FDPO). We also train fine-grained multi-modal reward models from InstructBLIP and evaluate their effectiveness with best-of-n rejection sampling. We perform human evaluation on both FDPO and rejection sampling, and find that they reduce hallucination rates in InstructBLIP by 41% and 55% respectively. We also find that our reward model generalizes to other multi-modal models, reducing hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has strong correlation with human evaluated accuracy scores.
http://arxiv.org/pdf/2308.06394
Anisha Gunjal, Jihan Yin, Erhan Bas
cs.CV, cs.LG
preprint
null
cs.CV
20230811
20230818
[ { "id": "2302.04023" }, { "id": "2305.17926" }, { "id": "2307.04964" }, { "id": "2305.20050" }, { "id": "2306.14895" }, { "id": "1803.01937" }, { "id": "2305.18290" }, { "id": "2204.05862" }, { "id": "2306.14565" }, { "id": "2305.06500" }, { "id": "2306.01693" }, { "id": "2304.08485" }, { "id": "2305.10355" } ]
2308.06394
50
Modularization Empowers Large Language Models with Multimodality. Zheng, R.; Dou, S.; Gao, S.; Hua, Y.; Shen, W.; Wang, B.; Liu, Y.; Jin, S.; Liu, Q.; Zhou, Y.; Xiong, L.; Chen, L.; Xi, Z.; Xu, N.; Lai, W.; Zhu, M.; Chang, C.; Yin, Z.; Weng, R.; Cheng, W.; Huang, H.; Sun, T.; Yan, H.; Gui, T.; Zhang, Q.; Qiu, X.; and Huang, X. 2023. Secrets of RLHF in Large Language Models Part I: PPO. arXiv:2307.04964. Ziegler, D. M.; Stiennon, N.; Wu, J.; Brown, T. B.; Radford, A.; Amodei, D.; Christiano, P. F.; and Irving, G. 2019. Fine-Tuning Language Models from Human Preferences. CoRR, abs/1909.08593.
2308.06394#50
Detecting and Preventing Hallucinations in Large Vision Language Models
Instruction tuned Large Vision Language Models (LVLMs) have significantly advanced in generalizing across a diverse set of multi-modal tasks, especially for Visual Question Answering (VQA). However, generating detailed responses that are visually grounded is still a challenging task for these models. We find that even the current state-of-the-art LVLMs (InstructBLIP) still contain a staggering 30 percent of the hallucinatory text in the form of non-existent objects, unfaithful descriptions, and inaccurate relationships. To address this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion Dataset that can be used to train and benchmark models for hallucination detection and prevention. M-HalDetect consists of 16k fine-grained annotations on VQA examples, making it the first comprehensive multi-modal hallucination detection dataset for detailed image descriptions. Unlike previous work that only consider object hallucination, we additionally annotate both entity descriptions and relationships that are unfaithful. To demonstrate the potential of this dataset for hallucination prevention, we optimize InstructBLIP through our novel Fine-grained Direct Preference Optimization (FDPO). We also train fine-grained multi-modal reward models from InstructBLIP and evaluate their effectiveness with best-of-n rejection sampling. We perform human evaluation on both FDPO and rejection sampling, and find that they reduce hallucination rates in InstructBLIP by 41% and 55% respectively. We also find that our reward model generalizes to other multi-modal models, reducing hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has strong correlation with human evaluated accuracy scores.
http://arxiv.org/pdf/2308.06394
Anisha Gunjal, Jihan Yin, Erhan Bas
cs.CV, cs.LG
preprint
null
cs.CV
20230811
20230818
[ { "id": "2302.04023" }, { "id": "2305.17926" }, { "id": "2307.04964" }, { "id": "2305.20050" }, { "id": "2306.14895" }, { "id": "1803.01937" }, { "id": "2305.18290" }, { "id": "2204.05862" }, { "id": "2306.14565" }, { "id": "2305.06500" }, { "id": "2306.01693" }, { "id": "2304.08485" }, { "id": "2305.10355" } ]
2308.06394
51
# Data Annotation Annotation Portal We use Scale AI’s RAPID Annotation Portal (sca 2023). The annotators are provided with an image, a question, and an LM-generated detailed description of the image. For each sentence, the annotators mark parts of the sentence into appropriate categories: Accurate, Inaccurate, Analysis, Unsure. This is illustrated in Figure 8. Annotation Examples We present some examples from the M-HalDetect dataset in Figure 7. Class-wise density distribution For each sentence in the dataset (train split), we compute densities as the number of words in each sentence annotated into each of the three classes. This is illustrated in the histogram in Figure 10, where the x-axis represents the class presence within the sentence and the y-axis represents the number of sentences. We see that of the three classes, the Accurate class’s densities are the least polar, while the Inaccurate class’s densities are the most polar, with a slight bias towards lower densities. This indicates that sentences with inaccuracies are either fully inaccurate or contain just a few inaccurate words. This matches the Accurate class’s slight bias towards higher densities, implying that most mixed-label sentences with inaccuracies tend to comprise inaccurate and accurate material, not analysis.
2308.06394#51
Detecting and Preventing Hallucinations in Large Vision Language Models
Instruction tuned Large Vision Language Models (LVLMs) have significantly advanced in generalizing across a diverse set of multi-modal tasks, especially for Visual Question Answering (VQA). However, generating detailed responses that are visually grounded is still a challenging task for these models. We find that even the current state-of-the-art LVLMs (InstructBLIP) still contain a staggering 30 percent of the hallucinatory text in the form of non-existent objects, unfaithful descriptions, and inaccurate relationships. To address this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion Dataset that can be used to train and benchmark models for hallucination detection and prevention. M-HalDetect consists of 16k fine-grained annotations on VQA examples, making it the first comprehensive multi-modal hallucination detection dataset for detailed image descriptions. Unlike previous work that only consider object hallucination, we additionally annotate both entity descriptions and relationships that are unfaithful. To demonstrate the potential of this dataset for hallucination prevention, we optimize InstructBLIP through our novel Fine-grained Direct Preference Optimization (FDPO). We also train fine-grained multi-modal reward models from InstructBLIP and evaluate their effectiveness with best-of-n rejection sampling. We perform human evaluation on both FDPO and rejection sampling, and find that they reduce hallucination rates in InstructBLIP by 41% and 55% respectively. We also find that our reward model generalizes to other multi-modal models, reducing hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has strong correlation with human evaluated accuracy scores.
http://arxiv.org/pdf/2308.06394
Anisha Gunjal, Jihan Yin, Erhan Bas
cs.CV, cs.LG
preprint
null
cs.CV
20230811
20230818
[ { "id": "2302.04023" }, { "id": "2305.17926" }, { "id": "2307.04964" }, { "id": "2305.20050" }, { "id": "2306.14895" }, { "id": "1803.01937" }, { "id": "2305.18290" }, { "id": "2204.05862" }, { "id": "2306.14565" }, { "id": "2305.06500" }, { "id": "2306.01693" }, { "id": "2304.08485" }, { "id": "2305.10355" } ]
2308.06394
52
As there is a high concentration of sentences that are fully categorized into one of the classes, we consider using a sentence-level representation of annotations as one of the reward model baselines. More details on the generation are deferred to Section . Researcher Agreement Figure 13 illustrates the class-level analysis of researcher agreement concerning the annotation task. Differing from human agreement, this assessment was conducted by two authors of the paper who possess expertise in the field of Natural Language Processing (NLP) and a comprehensive understanding of the practical use of the trained reward models derived from this dataset. The study compares independent annotations provided by two researchers for a consistent set of 10 images. Due to the fine-grained nature of the annotation, there are some disagreements or subjectivity in annotating individual words, especially between the accurate and inaccurate classes. We performed qualitative analysis on the disagreements between the researchers or annotators and found that labelling differences mostly arise between the classes (i) Accurate and Analysis, and (ii) Accurate and Inaccurate. The different interpretations of the image are attributed mainly to the subjectivity of this task or to ambiguity in the descriptions. In addition, disparities in annotation can emerge when a single attribute of a phrase is incorrect. In such instances, some annotators might opt to flag the specific attribute as erroneous, while others could decide to label the entire phrase
2308.06394#52
Detecting and Preventing Hallucinations in Large Vision Language Models
Instruction tuned Large Vision Language Models (LVLMs) have significantly advanced in generalizing across a diverse set of multi-modal tasks, especially for Visual Question Answering (VQA). However, generating detailed responses that are visually grounded is still a challenging task for these models. We find that even the current state-of-the-art LVLMs (InstructBLIP) still contain a staggering 30 percent of the hallucinatory text in the form of non-existent objects, unfaithful descriptions, and inaccurate relationships. To address this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion Dataset that can be used to train and benchmark models for hallucination detection and prevention. M-HalDetect consists of 16k fine-grained annotations on VQA examples, making it the first comprehensive multi-modal hallucination detection dataset for detailed image descriptions. Unlike previous work that only consider object hallucination, we additionally annotate both entity descriptions and relationships that are unfaithful. To demonstrate the potential of this dataset for hallucination prevention, we optimize InstructBLIP through our novel Fine-grained Direct Preference Optimization (FDPO). We also train fine-grained multi-modal reward models from InstructBLIP and evaluate their effectiveness with best-of-n rejection sampling. We perform human evaluation on both FDPO and rejection sampling, and find that they reduce hallucination rates in InstructBLIP by 41% and 55% respectively. We also find that our reward model generalizes to other multi-modal models, reducing hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has strong correlation with human evaluated accuracy scores.
http://arxiv.org/pdf/2308.06394
Anisha Gunjal, Jihan Yin, Erhan Bas
cs.CV, cs.LG
preprint
null
cs.CV
20230811
20230818
[ { "id": "2302.04023" }, { "id": "2305.17926" }, { "id": "2307.04964" }, { "id": "2305.20050" }, { "id": "2306.14895" }, { "id": "1803.01937" }, { "id": "2305.18290" }, { "id": "2204.05862" }, { "id": "2306.14565" }, { "id": "2305.06500" }, { "id": "2306.01693" }, { "id": "2304.08485" }, { "id": "2305.10355" } ]
2308.06394
53
as incorrect. Owing to these challenges in the dataset and the subjectivity of this task, we can expect a reward model trained on this dataset to have a ceiling classification performance in around the same range. # Training Details Model Freezing Ablations Reward Model We explore freezing different parts of the reward model during training, and report results in Table 3.
• Finetune Decoder: The entire LLM decoder and the reward model head are finetuned.
• FT-Decoder 3 layers: The last 3 layers of the LLM decoder and the reward model head are finetuned, while everything else is kept frozen.
• FT-Decoder 1 layer: The final layer of the LLM decoder and the reward model head are finetuned, while everything else is kept frozen.
• Finetune Qformer: The InstructBLIP Qformer is finetuned along with the reward model head, while the decoder is kept frozen.
We initially explored fine-tuning only the reward head while keeping the entire model frozen, but found a significant drop in performance of around 20% in both accuracy and F1 compared to all other methods, so we do not include it in our main results. This can be considered the performance baseline of the reward model.
2308.06394#53
Detecting and Preventing Hallucinations in Large Vision Language Models
Instruction tuned Large Vision Language Models (LVLMs) have significantly advanced in generalizing across a diverse set of multi-modal tasks, especially for Visual Question Answering (VQA). However, generating detailed responses that are visually grounded is still a challenging task for these models. We find that even the current state-of-the-art LVLMs (InstructBLIP) still contain a staggering 30 percent of the hallucinatory text in the form of non-existent objects, unfaithful descriptions, and inaccurate relationships. To address this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion Dataset that can be used to train and benchmark models for hallucination detection and prevention. M-HalDetect consists of 16k fine-grained annotations on VQA examples, making it the first comprehensive multi-modal hallucination detection dataset for detailed image descriptions. Unlike previous work that only consider object hallucination, we additionally annotate both entity descriptions and relationships that are unfaithful. To demonstrate the potential of this dataset for hallucination prevention, we optimize InstructBLIP through our novel Fine-grained Direct Preference Optimization (FDPO). We also train fine-grained multi-modal reward models from InstructBLIP and evaluate their effectiveness with best-of-n rejection sampling. We perform human evaluation on both FDPO and rejection sampling, and find that they reduce hallucination rates in InstructBLIP by 41% and 55% respectively. We also find that our reward model generalizes to other multi-modal models, reducing hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has strong correlation with human evaluated accuracy scores.
http://arxiv.org/pdf/2308.06394
Anisha Gunjal, Jihan Yin, Erhan Bas
cs.CV, cs.LG
preprint
null
cs.CV
20230811
20230818
[ { "id": "2302.04023" }, { "id": "2305.17926" }, { "id": "2307.04964" }, { "id": "2305.20050" }, { "id": "2306.14895" }, { "id": "1803.01937" }, { "id": "2305.18290" }, { "id": "2204.05862" }, { "id": "2306.14565" }, { "id": "2305.06500" }, { "id": "2306.01693" }, { "id": "2304.08485" }, { "id": "2305.10355" } ]
2308.06394
54
We see that for Binary Classification, the finetuned decoder outperforms or is on par with all the other baselines. However, the performance gap between the fully finetuned decoder and the partially finetuned models is not very significant. A similar trend is seen for Ternary Classification, but we observe a significant drop in performance for the finetuned Qformer. We theorize that this may be caused by fine-tuning a randomly initialized classification head at the end of the model at the same time as the QFormer, which sits towards the start of the model. Improvements could be made by fine-tuning the classification head first, before fine-tuning the QFormer, but we leave that to future work due to resource constraints. DPO While always fine-tuning the final unembedding layer of the decoder, we also study the effects of fine-tuning just the QFormer and fine-tuning just the top 3 layers of the decoder. We also explored fine-tuning the entire decoder in FDPO, but early experiments showed a heavy propensity towards overfitting and unstable training, so we chose not to pursue that option. Training Hyperparameters We train all models for 10 epochs with a batch size of 16 per device and a learning rate of 2e-5. The training is done with fsdp full shard auto wrap mode.
2308.06394#54
Detecting and Preventing Hallucinations in Large Vision Language Models
Instruction tuned Large Vision Language Models (LVLMs) have significantly advanced in generalizing across a diverse set of multi-modal tasks, especially for Visual Question Answering (VQA). However, generating detailed responses that are visually grounded is still a challenging task for these models. We find that even the current state-of-the-art LVLMs (InstructBLIP) still contain a staggering 30 percent of the hallucinatory text in the form of non-existent objects, unfaithful descriptions, and inaccurate relationships. To address this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion Dataset that can be used to train and benchmark models for hallucination detection and prevention. M-HalDetect consists of 16k fine-grained annotations on VQA examples, making it the first comprehensive multi-modal hallucination detection dataset for detailed image descriptions. Unlike previous work that only consider object hallucination, we additionally annotate both entity descriptions and relationships that are unfaithful. To demonstrate the potential of this dataset for hallucination prevention, we optimize InstructBLIP through our novel Fine-grained Direct Preference Optimization (FDPO). We also train fine-grained multi-modal reward models from InstructBLIP and evaluate their effectiveness with best-of-n rejection sampling. We perform human evaluation on both FDPO and rejection sampling, and find that they reduce hallucination rates in InstructBLIP by 41% and 55% respectively. We also find that our reward model generalizes to other multi-modal models, reducing hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has strong correlation with human evaluated accuracy scores.
http://arxiv.org/pdf/2308.06394
Anisha Gunjal, Jihan Yin, Erhan Bas
cs.CV, cs.LG
preprint
null
cs.CV
20230811
20230818
[ { "id": "2302.04023" }, { "id": "2305.17926" }, { "id": "2307.04964" }, { "id": "2305.20050" }, { "id": "2306.14895" }, { "id": "1803.01937" }, { "id": "2305.18290" }, { "id": "2204.05862" }, { "id": "2306.14565" }, { "id": "2305.06500" }, { "id": "2306.01693" }, { "id": "2304.08485" }, { "id": "2305.10355" } ]
2308.06394
55
Binary Classification Training Logs In this experiment, the classifier predicts a sentence into one of two classes: Accurate and Inaccurate. Training logs are shown in Figures 11 and 12. All models are trained for 10 The image depicts a man riding a skateboard down a paved path in a park. He is wearing a black jacket and appears to be enjoying himself as he glides along the path. There are several trees visible in the background, providing a serene setting for the skateboarder's leisurely ride. In addition to the skateboarder, there are several other people scattered throughout the scene. Some are walking on the path, while others are sitting or standing near the trees. Overall, the image captures a peaceful moment in the park where people are enjoying the outdoors and engaging in various activities. The image features a large white van parked inside a garage or workshop. The van is covered in newspapers and appears to be undergoing some sort of refurbishment or repainting process. There are several newspaper clippings on the van's surface, giving it a unique appearance. Inside the garage, there is a person working on the van, possibly involved in the refurbishment or repainting process. Additionally, there is another person standing near the van, observing the work being done. Figure 7: Example Annotations
2308.06394#55
Detecting and Preventing Hallucinations in Large Vision Language Models
Instruction tuned Large Vision Language Models (LVLMs) have significantly advanced in generalizing across a diverse set of multi-modal tasks, especially for Visual Question Answering (VQA). However, generating detailed responses that are visually grounded is still a challenging task for these models. We find that even the current state-of-the-art LVLMs (InstructBLIP) still contain a staggering 30 percent of the hallucinatory text in the form of non-existent objects, unfaithful descriptions, and inaccurate relationships. To address this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion Dataset that can be used to train and benchmark models for hallucination detection and prevention. M-HalDetect consists of 16k fine-grained annotations on VQA examples, making it the first comprehensive multi-modal hallucination detection dataset for detailed image descriptions. Unlike previous work that only consider object hallucination, we additionally annotate both entity descriptions and relationships that are unfaithful. To demonstrate the potential of this dataset for hallucination prevention, we optimize InstructBLIP through our novel Fine-grained Direct Preference Optimization (FDPO). We also train fine-grained multi-modal reward models from InstructBLIP and evaluate their effectiveness with best-of-n rejection sampling. We perform human evaluation on both FDPO and rejection sampling, and find that they reduce hallucination rates in InstructBLIP by 41% and 55% respectively. We also find that our reward model generalizes to other multi-modal models, reducing hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has strong correlation with human evaluated accuracy scores.
http://arxiv.org/pdf/2308.06394
Anisha Gunjal, Jihan Yin, Erhan Bas
cs.CV, cs.LG
preprint
null
cs.CV
20230811
20230818
[ { "id": "2302.04023" }, { "id": "2305.17926" }, { "id": "2307.04964" }, { "id": "2305.20050" }, { "id": "2306.14895" }, { "id": "1803.01937" }, { "id": "2305.18290" }, { "id": "2204.05862" }, { "id": "2306.14565" }, { "id": "2305.06500" }, { "id": "2306.01693" }, { "id": "2304.08485" }, { "id": "2305.10355" } ]
2308.06394
56
Figure 7: Example Annotations [Figure 8 shows the annotation interface: the image COCO_val2014_000000516856.jpg, the question "<image> Compose a comprehensive description of the image, capturing its visual elements, including colors, shapes, textures, objects, and any individuals. In addition, depict the activities or interactions happening within the scene and provide an overall analysis of the composition, atmosphere, and narrative conveyed.", and a model response ("The image features a young man sitting at a dining table, eating a sandwich. He is wearing a gray shirt and appears to be enjoying his meal. The table is cluttered with various objects, including plates, bowls, cups, and bottles. There are also candles on the table, adding a warm and cozy atmosphere to the scene.") with spans labelled Accurate, Inaccurate, Analysis, or Unsure.] Figure 8: Scale AI RAPID Portal used for annotation.
2308.06394#56
Detecting and Preventing Hallucinations in Large Vision Language Models
Instruction tuned Large Vision Language Models (LVLMs) have significantly advanced in generalizing across a diverse set of multi-modal tasks, especially for Visual Question Answering (VQA). However, generating detailed responses that are visually grounded is still a challenging task for these models. We find that even the current state-of-the-art LVLMs (InstructBLIP) still contain a staggering 30 percent of the hallucinatory text in the form of non-existent objects, unfaithful descriptions, and inaccurate relationships. To address this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion Dataset that can be used to train and benchmark models for hallucination detection and prevention. M-HalDetect consists of 16k fine-grained annotations on VQA examples, making it the first comprehensive multi-modal hallucination detection dataset for detailed image descriptions. Unlike previous work that only consider object hallucination, we additionally annotate both entity descriptions and relationships that are unfaithful. To demonstrate the potential of this dataset for hallucination prevention, we optimize InstructBLIP through our novel Fine-grained Direct Preference Optimization (FDPO). We also train fine-grained multi-modal reward models from InstructBLIP and evaluate their effectiveness with best-of-n rejection sampling. We perform human evaluation on both FDPO and rejection sampling, and find that they reduce hallucination rates in InstructBLIP by 41% and 55% respectively. We also find that our reward model generalizes to other multi-modal models, reducing hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has strong correlation with human evaluated accuracy scores.
http://arxiv.org/pdf/2308.06394
Anisha Gunjal, Jihan Yin, Erhan Bas
cs.CV, cs.LG
preprint
null
cs.CV
20230811
20230818
[ { "id": "2302.04023" }, { "id": "2305.17926" }, { "id": "2307.04964" }, { "id": "2305.20050" }, { "id": "2306.14895" }, { "id": "1803.01937" }, { "id": "2305.18290" }, { "id": "2204.05862" }, { "id": "2306.14565" }, { "id": "2305.06500" }, { "id": "2306.01693" }, { "id": "2304.08485" }, { "id": "2305.10355" } ]
2308.06394
57
Figure 8: Scale AI RAPID Portal used for annotation.

Table 3: Baseline Reward Model Results: We compare the development set performance of sentence-level and segment-level reward prediction models. Comparison is done with Accuracy and F-1 score across binary and ternary label granularities.

| Type | Method | Sentence-level Accuracy | Sentence-level F1 | Segment-level Accuracy | Segment-level F1 |
|---|---|---|---|---|---|
| Binary | Finetune Decoder | 79.2 | 78.37 | 83.92 | 83.22 |
| Binary | FT-Decoder 3 layers | 78.5 | 76.91 | 83.73 | 82.61 |
| Binary | FT-Decoder 1 layer | 78.41 | 77.4 | 83.08 | 81.65 |
| Binary | Finetune Qformer | 79.22 | 78.22 | 83.41 | 81.61 |
| Ternary | Finetune Decoder | 71.4 | 70.8 | 77.2 | 76.93 |
| Ternary | FT-Decoder 3 layers | 70.41 | 69.64 | 76.29 | 75.37 |
| Ternary | FT-Decoder 1 layer | 70.25 | 70.02 | 76.08 | 75.46 |
| Ternary | Finetune Qformer | 68.8 | 62.7 | 74.7 | 73.97 |

[Figure 9 panels: histograms titled Binary Class Distribution, Ternary Class Distribution, and Four Class Distribution over the labels Accurate, Inaccurate, Analysis, and Unsure.]
2308.06394#57
Detecting and Preventing Hallucinations in Large Vision Language Models
Instruction tuned Large Vision Language Models (LVLMs) have significantly advanced in generalizing across a diverse set of multi-modal tasks, especially for Visual Question Answering (VQA). However, generating detailed responses that are visually grounded is still a challenging task for these models. We find that even the current state-of-the-art LVLMs (InstructBLIP) still contain a staggering 30 percent of the hallucinatory text in the form of non-existent objects, unfaithful descriptions, and inaccurate relationships. To address this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion Dataset that can be used to train and benchmark models for hallucination detection and prevention. M-HalDetect consists of 16k fine-grained annotations on VQA examples, making it the first comprehensive multi-modal hallucination detection dataset for detailed image descriptions. Unlike previous work that only consider object hallucination, we additionally annotate both entity descriptions and relationships that are unfaithful. To demonstrate the potential of this dataset for hallucination prevention, we optimize InstructBLIP through our novel Fine-grained Direct Preference Optimization (FDPO). We also train fine-grained multi-modal reward models from InstructBLIP and evaluate their effectiveness with best-of-n rejection sampling. We perform human evaluation on both FDPO and rejection sampling, and find that they reduce hallucination rates in InstructBLIP by 41% and 55% respectively. We also find that our reward model generalizes to other multi-modal models, reducing hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has strong correlation with human evaluated accuracy scores.
http://arxiv.org/pdf/2308.06394
Anisha Gunjal, Jihan Yin, Erhan Bas
cs.CV, cs.LG
preprint
null
cs.CV
20230811
20230818
[ { "id": "2302.04023" }, { "id": "2305.17926" }, { "id": "2307.04964" }, { "id": "2305.20050" }, { "id": "2306.14895" }, { "id": "1803.01937" }, { "id": "2305.18290" }, { "id": "2204.05862" }, { "id": "2306.14565" }, { "id": "2305.06500" }, { "id": "2306.01693" }, { "id": "2304.08485" }, { "id": "2305.10355" } ]
2308.06394
58
Figure 9: Class-wise Label Distribution

Table 4: Dataset Scaling: Increasing the dataset size for reward model training gives a performance boost as size increases from a quarter to half but saturates thereafter.

| Datasize | Accuracy | F1 Score |
|---|---|---|
| Full Dataset | 0.7489 | 0.7414 |
| Half Dataset | 0.7474 | 0.7387 |
| Quarter Dataset | 0.7375 | 0.7144 |

epochs. Fine-tuning the entire decoder model (orange) leads to over-fitting compared to fine-tuning only the last few layers of the decoder. Freezing the entire decoder and fine-tuning only the reward model head has the lowest performance. by the authors. Question prompts are passed to InstructBLIP (Dai et al. 2023) to sample responses. Ternary Classification Training Logs The training curves and evaluation metrics for the ternary reward model classifiers are shown in Figures 14 and 15. In this experiment, the classifier predicts a sentence into one of three classes: Accurate, Inaccurate, and Analysis.
2308.06394#58
Detecting and Preventing Hallucinations in Large Vision Language Models
Instruction tuned Large Vision Language Models (LVLMs) have significantly advanced in generalizing across a diverse set of multi-modal tasks, especially for Visual Question Answering (VQA). However, generating detailed responses that are visually grounded is still a challenging task for these models. We find that even the current state-of-the-art LVLMs (InstructBLIP) still contain a staggering 30 percent of the hallucinatory text in the form of non-existent objects, unfaithful descriptions, and inaccurate relationships. To address this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion Dataset that can be used to train and benchmark models for hallucination detection and prevention. M-HalDetect consists of 16k fine-grained annotations on VQA examples, making it the first comprehensive multi-modal hallucination detection dataset for detailed image descriptions. Unlike previous work that only consider object hallucination, we additionally annotate both entity descriptions and relationships that are unfaithful. To demonstrate the potential of this dataset for hallucination prevention, we optimize InstructBLIP through our novel Fine-grained Direct Preference Optimization (FDPO). We also train fine-grained multi-modal reward models from InstructBLIP and evaluate their effectiveness with best-of-n rejection sampling. We perform human evaluation on both FDPO and rejection sampling, and find that they reduce hallucination rates in InstructBLIP by 41% and 55% respectively. We also find that our reward model generalizes to other multi-modal models, reducing hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has strong correlation with human evaluated accuracy scores.
http://arxiv.org/pdf/2308.06394
Anisha Gunjal, Jihan Yin, Erhan Bas
cs.CV, cs.LG
preprint
null
cs.CV
20230811
20230818
[ { "id": "2302.04023" }, { "id": "2305.17926" }, { "id": "2307.04964" }, { "id": "2305.20050" }, { "id": "2306.14895" }, { "id": "1803.01937" }, { "id": "2305.18290" }, { "id": "2204.05862" }, { "id": "2306.14565" }, { "id": "2305.06500" }, { "id": "2306.01693" }, { "id": "2304.08485" }, { "id": "2305.10355" } ]
2308.06394
59
Data Scaling Analysis To study the effects of data scaling on the performance of our reward model, we ablate the amount of training data used, comparing the differences in validation accuracy and F-1. We perform this analysis on the reward model that fine-tunes the last 3 layers of InstructBLIP's LM decoder. Table 4 shows that as the dataset size for reward model training is gradually increased from a quarter to half, the model's F1 score improves by over 2 percent. However, beyond the half dataset size, further increments in data do not lead to substantial performance improvements, and performance begins to saturate. Question prompts for dataset generation Figure 16 lists the description generation-related question prompts that we use for generating data. We generate data with two sets of questions. The first set is derived from the data generation method used in the LLaVA dataset (Liu et al. 2023b). The second set is a custom list of questions drafted [Figure 10 panels: per-class density histograms for Accurate, Inaccurate, and Analysis; the x-axis is class presence within a sentence (0.0 to 1.0) and the y-axis is the number of sentences.]
2308.06394#59
Detecting and Preventing Hallucinations in Large Vision Language Models
Instruction tuned Large Vision Language Models (LVLMs) have significantly advanced in generalizing across a diverse set of multi-modal tasks, especially for Visual Question Answering (VQA). However, generating detailed responses that are visually grounded is still a challenging task for these models. We find that even the current state-of-the-art LVLMs (InstructBLIP) still contain a staggering 30 percent of the hallucinatory text in the form of non-existent objects, unfaithful descriptions, and inaccurate relationships. To address this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion Dataset that can be used to train and benchmark models for hallucination detection and prevention. M-HalDetect consists of 16k fine-grained annotations on VQA examples, making it the first comprehensive multi-modal hallucination detection dataset for detailed image descriptions. Unlike previous work that only consider object hallucination, we additionally annotate both entity descriptions and relationships that are unfaithful. To demonstrate the potential of this dataset for hallucination prevention, we optimize InstructBLIP through our novel Fine-grained Direct Preference Optimization (FDPO). We also train fine-grained multi-modal reward models from InstructBLIP and evaluate their effectiveness with best-of-n rejection sampling. We perform human evaluation on both FDPO and rejection sampling, and find that they reduce hallucination rates in InstructBLIP by 41% and 55% respectively. We also find that our reward model generalizes to other multi-modal models, reducing hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has strong correlation with human evaluated accuracy scores.
http://arxiv.org/pdf/2308.06394
Anisha Gunjal, Jihan Yin, Erhan Bas
cs.CV, cs.LG
preprint
null
cs.CV
20230811
20230818
[ { "id": "2302.04023" }, { "id": "2305.17926" }, { "id": "2307.04964" }, { "id": "2305.20050" }, { "id": "2306.14895" }, { "id": "1803.01937" }, { "id": "2305.18290" }, { "id": "2204.05862" }, { "id": "2306.14565" }, { "id": "2305.06500" }, { "id": "2306.01693" }, { "id": "2304.08485" }, { "id": "2305.10355" } ]
2308.06394
61
[Figure 11 panels: eval/accuracy, eval/f1_score, eval/loss, and train/loss curves (x-axis train/global_step, 200 to 1k) for the runs finetune_qformer_sentence, partial_finetune_1_layer_sentence, decoder_finetune_sentence, and partial_finetune_3_layer_sentence.]
2308.06394#61
Detecting and Preventing Hallucinations in Large Vision Language Models
Instruction tuned Large Vision Language Models (LVLMs) have significantly advanced in generalizing across a diverse set of multi-modal tasks, especially for Visual Question Answering (VQA). However, generating detailed responses that are visually grounded is still a challenging task for these models. We find that even the current state-of-the-art LVLMs (InstructBLIP) still contain a staggering 30 percent of the hallucinatory text in the form of non-existent objects, unfaithful descriptions, and inaccurate relationships. To address this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion Dataset that can be used to train and benchmark models for hallucination detection and prevention. M-HalDetect consists of 16k fine-grained annotations on VQA examples, making it the first comprehensive multi-modal hallucination detection dataset for detailed image descriptions. Unlike previous work that only consider object hallucination, we additionally annotate both entity descriptions and relationships that are unfaithful. To demonstrate the potential of this dataset for hallucination prevention, we optimize InstructBLIP through our novel Fine-grained Direct Preference Optimization (FDPO). We also train fine-grained multi-modal reward models from InstructBLIP and evaluate their effectiveness with best-of-n rejection sampling. We perform human evaluation on both FDPO and rejection sampling, and find that they reduce hallucination rates in InstructBLIP by 41% and 55% respectively. We also find that our reward model generalizes to other multi-modal models, reducing hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has strong correlation with human evaluated accuracy scores.
http://arxiv.org/pdf/2308.06394
Anisha Gunjal, Jihan Yin, Erhan Bas
cs.CV, cs.LG
preprint
null
cs.CV
20230811
20230818
[ { "id": "2302.04023" }, { "id": "2305.17926" }, { "id": "2307.04964" }, { "id": "2305.20050" }, { "id": "2306.14895" }, { "id": "1803.01937" }, { "id": "2305.18290" }, { "id": "2204.05862" }, { "id": "2306.14565" }, { "id": "2305.06500" }, { "id": "2306.01693" }, { "id": "2304.08485" }, { "id": "2305.10355" } ]
2308.06394
63
[Figure 12 panels: eval/accuracy, eval/f1_score, train/loss, and eval/loss curves (x-axis train/global_step, 200 to 1k) for the runs partial_decoder_1_layer_segment_4k, partial_finetune_decoder_3_layers_segment_4k, finetune_decoder_segment_4k, and finetune_qformer_segment_4k.]
2308.06394#63
Detecting and Preventing Hallucinations in Large Vision Language Models
Instruction tuned Large Vision Language Models (LVLMs) have significantly advanced in generalizing across a diverse set of multi-modal tasks, especially for Visual Question Answering (VQA). However, generating detailed responses that are visually grounded is still a challenging task for these models. We find that even the current state-of-the-art LVLMs (InstructBLIP) still contain a staggering 30 percent of the hallucinatory text in the form of non-existent objects, unfaithful descriptions, and inaccurate relationships. To address this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion Dataset that can be used to train and benchmark models for hallucination detection and prevention. M-HalDetect consists of 16k fine-grained annotations on VQA examples, making it the first comprehensive multi-modal hallucination detection dataset for detailed image descriptions. Unlike previous work that only consider object hallucination, we additionally annotate both entity descriptions and relationships that are unfaithful. To demonstrate the potential of this dataset for hallucination prevention, we optimize InstructBLIP through our novel Fine-grained Direct Preference Optimization (FDPO). We also train fine-grained multi-modal reward models from InstructBLIP and evaluate their effectiveness with best-of-n rejection sampling. We perform human evaluation on both FDPO and rejection sampling, and find that they reduce hallucination rates in InstructBLIP by 41% and 55% respectively. We also find that our reward model generalizes to other multi-modal models, reducing hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has strong correlation with human evaluated accuracy scores.
http://arxiv.org/pdf/2308.06394
Anisha Gunjal, Jihan Yin, Erhan Bas
cs.CV, cs.LG
preprint
null
cs.CV
20230811
20230818
[ { "id": "2302.04023" }, { "id": "2305.17926" }, { "id": "2307.04964" }, { "id": "2305.20050" }, { "id": "2306.14895" }, { "id": "1803.01937" }, { "id": "2305.18290" }, { "id": "2204.05862" }, { "id": "2306.14565" }, { "id": "2305.06500" }, { "id": "2306.01693" }, { "id": "2304.08485" }, { "id": "2305.10355" } ]
2308.06394
64
[Figure 12 (continued): train/loss and eval/loss curves for the same segment-level runs.]
2308.06394#64
Detecting and Preventing Hallucinations in Large Vision Language Models
Instruction tuned Large Vision Language Models (LVLMs) have significantly advanced in generalizing across a diverse set of multi-modal tasks, especially for Visual Question Answering (VQA). However, generating detailed responses that are visually grounded is still a challenging task for these models. We find that even the current state-of-the-art LVLMs (InstructBLIP) still contain a staggering 30 percent of the hallucinatory text in the form of non-existent objects, unfaithful descriptions, and inaccurate relationships. To address this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion Dataset that can be used to train and benchmark models for hallucination detection and prevention. M-HalDetect consists of 16k fine-grained annotations on VQA examples, making it the first comprehensive multi-modal hallucination detection dataset for detailed image descriptions. Unlike previous work that only consider object hallucination, we additionally annotate both entity descriptions and relationships that are unfaithful. To demonstrate the potential of this dataset for hallucination prevention, we optimize InstructBLIP through our novel Fine-grained Direct Preference Optimization (FDPO). We also train fine-grained multi-modal reward models from InstructBLIP and evaluate their effectiveness with best-of-n rejection sampling. We perform human evaluation on both FDPO and rejection sampling, and find that they reduce hallucination rates in InstructBLIP by 41% and 55% respectively. We also find that our reward model generalizes to other multi-modal models, reducing hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has strong correlation with human evaluated accuracy scores.
http://arxiv.org/pdf/2308.06394
Anisha Gunjal, Jihan Yin, Erhan Bas
cs.CV, cs.LG
preprint
null
cs.CV
20230811
20230818
[ { "id": "2302.04023" }, { "id": "2305.17926" }, { "id": "2307.04964" }, { "id": "2305.20050" }, { "id": "2306.14895" }, { "id": "1803.01937" }, { "id": "2305.18290" }, { "id": "2204.05862" }, { "id": "2306.14565" }, { "id": "2305.06500" }, { "id": "2306.01693" }, { "id": "2304.08485" }, { "id": "2305.10355" } ]
2308.06394
65
Figure 12: Binary Classification: Segment-level model Training and Evaluation loss, Evaluation F-1 Score and Accuracy. [Figure 13 panels: two confusion matrices over the labels Accurate, Inaccurate, Analysis, and Unsure (true label vs. predicted label): Confusion Matrix for Researcher-Annotator Agreement and Confusion Matrix for Inter-Researcher Agreement.] Figure 13: Confusion Matrix for class-wise researcher agreement scores for the M-HalDetect dataset's annotation task.
2308.06394#65
Detecting and Preventing Hallucinations in Large Vision Language Models
Instruction tuned Large Vision Language Models (LVLMs) have significantly advanced in generalizing across a diverse set of multi-modal tasks, especially for Visual Question Answering (VQA). However, generating detailed responses that are visually grounded is still a challenging task for these models. We find that even the current state-of-the-art LVLMs (InstructBLIP) still contain a staggering 30 percent of the hallucinatory text in the form of non-existent objects, unfaithful descriptions, and inaccurate relationships. To address this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion Dataset that can be used to train and benchmark models for hallucination detection and prevention. M-HalDetect consists of 16k fine-grained annotations on VQA examples, making it the first comprehensive multi-modal hallucination detection dataset for detailed image descriptions. Unlike previous work that only consider object hallucination, we additionally annotate both entity descriptions and relationships that are unfaithful. To demonstrate the potential of this dataset for hallucination prevention, we optimize InstructBLIP through our novel Fine-grained Direct Preference Optimization (FDPO). We also train fine-grained multi-modal reward models from InstructBLIP and evaluate their effectiveness with best-of-n rejection sampling. We perform human evaluation on both FDPO and rejection sampling, and find that they reduce hallucination rates in InstructBLIP by 41% and 55% respectively. We also find that our reward model generalizes to other multi-modal models, reducing hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has strong correlation with human evaluated accuracy scores.
http://arxiv.org/pdf/2308.06394
Anisha Gunjal, Jihan Yin, Erhan Bas
cs.CV, cs.LG
preprint
null
cs.CV
20230811
20230818
[ { "id": "2302.04023" }, { "id": "2305.17926" }, { "id": "2307.04964" }, { "id": "2305.20050" }, { "id": "2306.14895" }, { "id": "1803.01937" }, { "id": "2305.18290" }, { "id": "2204.05862" }, { "id": "2306.14565" }, { "id": "2305.06500" }, { "id": "2306.01693" }, { "id": "2304.08485" }, { "id": "2305.10355" } ]
2308.06394
66
[Ternary sentence-level training-log panels: eval/accuracy, eval/f1_score, train/loss, and eval/loss curves for the runs ternary_sent_finetune_qformer, ternary_sent_finetune_decoder, ternary_sent_partial_ft_1_layer, and ternary_sent_partial_ft_3_layers.]
2308.06394#66
Detecting and Preventing Hallucinations in Large Vision Language Models
Instruction tuned Large Vision Language Models (LVLMs) have significantly advanced in generalizing across a diverse set of multi-modal tasks, especially for Visual Question Answering (VQA). However, generating detailed responses that are visually grounded is still a challenging task for these models. We find that even the current state-of-the-art LVLMs (InstructBLIP) still contain a staggering 30 percent of the hallucinatory text in the form of non-existent objects, unfaithful descriptions, and inaccurate relationships. To address this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion Dataset that can be used to train and benchmark models for hallucination detection and prevention. M-HalDetect consists of 16k fine-grained annotations on VQA examples, making it the first comprehensive multi-modal hallucination detection dataset for detailed image descriptions. Unlike previous work that only consider object hallucination, we additionally annotate both entity descriptions and relationships that are unfaithful. To demonstrate the potential of this dataset for hallucination prevention, we optimize InstructBLIP through our novel Fine-grained Direct Preference Optimization (FDPO). We also train fine-grained multi-modal reward models from InstructBLIP and evaluate their effectiveness with best-of-n rejection sampling. We perform human evaluation on both FDPO and rejection sampling, and find that they reduce hallucination rates in InstructBLIP by 41% and 55% respectively. We also find that our reward model generalizes to other multi-modal models, reducing hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has strong correlation with human evaluated accuracy scores.
http://arxiv.org/pdf/2308.06394
Anisha Gunjal, Jihan Yin, Erhan Bas
cs.CV, cs.LG
preprint
null
cs.CV
20230811
20230818
[ { "id": "2302.04023" }, { "id": "2305.17926" }, { "id": "2307.04964" }, { "id": "2305.20050" }, { "id": "2306.14895" }, { "id": "1803.01937" }, { "id": "2305.18290" }, { "id": "2204.05862" }, { "id": "2306.14565" }, { "id": "2305.06500" }, { "id": "2306.01693" }, { "id": "2304.08485" }, { "id": "2305.10355" } ]
2308.06394
68
[Figure: training and evaluation curves for the ternary segment-level models, with panels eval/accuracy, eval/f1_score, train/loss, and eval/loss over roughly 1k training steps; runs compared: ternary_segment_finetune_qformer, ternary_segment_finetune_decoder, ternary_segment_partial_ft_1_layer, and ternary_segment_partial_ft_3_layers.] Figure 14: Ternary Classification: Sentence-level model Training and Evaluation loss, Evaluation F-1 Score and Accuracy. Figure 15: Ternary Classification: Segment-level model Training and Evaluation loss, Evaluation F-1 Score and Accuracy.
2308.06394#68
Detecting and Preventing Hallucinations in Large Vision Language Models
Instruction-tuned Large Vision Language Models (LVLMs) have significantly advanced in generalizing across a diverse set of multi-modal tasks, especially for Visual Question Answering (VQA). However, generating detailed responses that are visually grounded is still a challenging task for these models. We find that even the current state-of-the-art LVLMs (InstructBLIP) still contain a staggering 30 percent of hallucinatory text in the form of non-existent objects, unfaithful descriptions, and inaccurate relationships. To address this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion Dataset that can be used to train and benchmark models for hallucination detection and prevention. M-HalDetect consists of 16k fine-grained annotations on VQA examples, making it the first comprehensive multi-modal hallucination detection dataset for detailed image descriptions. Unlike previous work that only considers object hallucination, we additionally annotate both entity descriptions and relationships that are unfaithful. To demonstrate the potential of this dataset for hallucination prevention, we optimize InstructBLIP through our novel Fine-grained Direct Preference Optimization (FDPO). We also train fine-grained multi-modal reward models from InstructBLIP and evaluate their effectiveness with best-of-n rejection sampling. We perform human evaluation on both FDPO and rejection sampling, and find that they reduce hallucination rates in InstructBLIP by 41% and 55% respectively. We also find that our reward model generalizes to other multi-modal models, reducing hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has strong correlation with human-evaluated accuracy scores.
http://arxiv.org/pdf/2308.06394
Anisha Gunjal, Jihan Yin, Erhan Bas
cs.CV, cs.LG
preprint
null
cs.CV
20230811
20230818
[ { "id": "2302.04023" }, { "id": "2305.17926" }, { "id": "2307.04964" }, { "id": "2305.20050" }, { "id": "2306.14895" }, { "id": "1803.01937" }, { "id": "2305.18290" }, { "id": "2204.05862" }, { "id": "2306.14565" }, { "id": "2305.06500" }, { "id": "2306.01693" }, { "id": "2304.08485" }, { "id": "2305.10355" } ]
2308.06394
69
LLaVA based question prompts: 'Can you describe the main features of this image for me? <image>' 'Explain the visual content of the image in great detail. <image>' '<image> Describe the following image.' 'Analyze the image in a comprehensive and detailed manner. <image>' '<image> What are the key elements in this picture?' 'What do you see happening in this image? <image>' "<image> What's happening in the scene?" 'What are the key elements in this picture? <image>' 'Write a detailed description of the given image. <image>' '<image> Explain the visual content of the image in great detail.' 'Can you elaborate on the elements of the picture provided? <image>' 'What is this photo about? <image>' 'Describe the following image. <image>' '<image> Analyze the image in a comprehensive and detailed manner.' '<image> What do you see happening in this image?' '<image> Write a detailed description of the given image.'
2308.06394#69
Detecting and Preventing Hallucinations in Large Vision Language Models
Instruction-tuned Large Vision Language Models (LVLMs) have significantly advanced in generalizing across a diverse set of multi-modal tasks, especially for Visual Question Answering (VQA). However, generating detailed responses that are visually grounded is still a challenging task for these models. We find that even the current state-of-the-art LVLMs (InstructBLIP) still contain a staggering 30 percent of hallucinatory text in the form of non-existent objects, unfaithful descriptions, and inaccurate relationships. To address this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion Dataset that can be used to train and benchmark models for hallucination detection and prevention. M-HalDetect consists of 16k fine-grained annotations on VQA examples, making it the first comprehensive multi-modal hallucination detection dataset for detailed image descriptions. Unlike previous work that only considers object hallucination, we additionally annotate both entity descriptions and relationships that are unfaithful. To demonstrate the potential of this dataset for hallucination prevention, we optimize InstructBLIP through our novel Fine-grained Direct Preference Optimization (FDPO). We also train fine-grained multi-modal reward models from InstructBLIP and evaluate their effectiveness with best-of-n rejection sampling. We perform human evaluation on both FDPO and rejection sampling, and find that they reduce hallucination rates in InstructBLIP by 41% and 55% respectively. We also find that our reward model generalizes to other multi-modal models, reducing hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has strong correlation with human-evaluated accuracy scores.
http://arxiv.org/pdf/2308.06394
Anisha Gunjal, Jihan Yin, Erhan Bas
cs.CV, cs.LG
preprint
null
cs.CV
20230811
20230818
[ { "id": "2302.04023" }, { "id": "2305.17926" }, { "id": "2307.04964" }, { "id": "2305.20050" }, { "id": "2306.14895" }, { "id": "1803.01937" }, { "id": "2305.18290" }, { "id": "2204.05862" }, { "id": "2306.14565" }, { "id": "2305.06500" }, { "id": "2306.01693" }, { "id": "2304.08485" }, { "id": "2305.10355" } ]
2308.06394
70
manner.' '<image> What do you see happening in this image?' '<image> Write a detailed description of the given image.' '<image> Can you elaborate on the elements of the picture provided?' '<image> What do you think is going on in this snapshot?' '<image> Can you describe the main features of this image for me?' 'What do you think is going on in this snapshot? <image>' '<image> What is this photo about?' "What's happening in the scene? <image>" Custom question prompts: '<image> Please provide a detailed description of the image. Describe the visual elements, colors, shapes, textures, and any objects or people present along with the overall mood or atmosphere portrayed in the image.' '<image> Please provide a detailed description of the image, including its visual elements, such as colors, shapes, textures, objects, and people.' '<image> Provide an intricate description of the image, capturing its visual elements, including colors, shapes, textures, objects, and
2308.06394#70
Detecting and Preventing Hallucinations in Large Vision Language Models
Instruction-tuned Large Vision Language Models (LVLMs) have significantly advanced in generalizing across a diverse set of multi-modal tasks, especially for Visual Question Answering (VQA). However, generating detailed responses that are visually grounded is still a challenging task for these models. We find that even the current state-of-the-art LVLMs (InstructBLIP) still contain a staggering 30 percent of hallucinatory text in the form of non-existent objects, unfaithful descriptions, and inaccurate relationships. To address this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion Dataset that can be used to train and benchmark models for hallucination detection and prevention. M-HalDetect consists of 16k fine-grained annotations on VQA examples, making it the first comprehensive multi-modal hallucination detection dataset for detailed image descriptions. Unlike previous work that only considers object hallucination, we additionally annotate both entity descriptions and relationships that are unfaithful. To demonstrate the potential of this dataset for hallucination prevention, we optimize InstructBLIP through our novel Fine-grained Direct Preference Optimization (FDPO). We also train fine-grained multi-modal reward models from InstructBLIP and evaluate their effectiveness with best-of-n rejection sampling. We perform human evaluation on both FDPO and rejection sampling, and find that they reduce hallucination rates in InstructBLIP by 41% and 55% respectively. We also find that our reward model generalizes to other multi-modal models, reducing hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has strong correlation with human-evaluated accuracy scores.
http://arxiv.org/pdf/2308.06394
Anisha Gunjal, Jihan Yin, Erhan Bas
cs.CV, cs.LG
preprint
null
cs.CV
20230811
20230818
[ { "id": "2302.04023" }, { "id": "2305.17926" }, { "id": "2307.04964" }, { "id": "2305.20050" }, { "id": "2306.14895" }, { "id": "1803.01937" }, { "id": "2305.18290" }, { "id": "2204.05862" }, { "id": "2306.14565" }, { "id": "2305.06500" }, { "id": "2306.01693" }, { "id": "2304.08485" }, { "id": "2305.10355" } ]
2308.06394
71
people." ‘<image> Provide an intricate description of the image, capturing its visual elements, including colors, shapes, textures, objects, and any people present.’ ‘<image> Delve into the details of the image and compose a comprehensive description, incorporating its visual aspects like colors, shapes, textures, objects, and individuals.’ '<image> Craft an elaborate depiction of the image, highlighting its visual components such as colors, shapes, textures, objects, and the presence of any individuals.’ ‘<image> Compose a detailed account of the image, encompassing its visual characteristics, like colors, shapes, textures, objects, and any human subjects, by paying careful attention to the specifics." ‘<image> Compose a comprehensive description of the image, capturing its visual elements, including colors, shapes, textures, objects, and any individuals. In addition, depict the activities or interactions happening within the scene and provide an overall analysis of the composition, atmosphere, and narrative conveyed." "Please provide a detailed description of the image. Describe the visual
2308.06394#71
Detecting and Preventing Hallucinations in Large Vision Language Models
Instruction-tuned Large Vision Language Models (LVLMs) have significantly advanced in generalizing across a diverse set of multi-modal tasks, especially for Visual Question Answering (VQA). However, generating detailed responses that are visually grounded is still a challenging task for these models. We find that even the current state-of-the-art LVLMs (InstructBLIP) still contain a staggering 30 percent of hallucinatory text in the form of non-existent objects, unfaithful descriptions, and inaccurate relationships. To address this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion Dataset that can be used to train and benchmark models for hallucination detection and prevention. M-HalDetect consists of 16k fine-grained annotations on VQA examples, making it the first comprehensive multi-modal hallucination detection dataset for detailed image descriptions. Unlike previous work that only considers object hallucination, we additionally annotate both entity descriptions and relationships that are unfaithful. To demonstrate the potential of this dataset for hallucination prevention, we optimize InstructBLIP through our novel Fine-grained Direct Preference Optimization (FDPO). We also train fine-grained multi-modal reward models from InstructBLIP and evaluate their effectiveness with best-of-n rejection sampling. We perform human evaluation on both FDPO and rejection sampling, and find that they reduce hallucination rates in InstructBLIP by 41% and 55% respectively. We also find that our reward model generalizes to other multi-modal models, reducing hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has strong correlation with human-evaluated accuracy scores.
http://arxiv.org/pdf/2308.06394
Anisha Gunjal, Jihan Yin, Erhan Bas
cs.CV, cs.LG
preprint
null
cs.CV
20230811
20230818
[ { "id": "2302.04023" }, { "id": "2305.17926" }, { "id": "2307.04964" }, { "id": "2305.20050" }, { "id": "2306.14895" }, { "id": "1803.01937" }, { "id": "2305.18290" }, { "id": "2204.05862" }, { "id": "2306.14565" }, { "id": "2305.06500" }, { "id": "2306.01693" }, { "id": "2304.08485" }, { "id": "2305.10355" } ]
2308.06394
72
and provide an overall analysis of the composition, atmosphere, and narrative conveyed.' 'Please provide a detailed description of the image. Describe the visual elements, colors, shapes, textures, and any objects or people present along with the overall mood or atmosphere portrayed in the image. <image>' 'Please provide a detailed description of the image, including its visual elements, such as colors, shapes, textures, objects, and people. <image>' 'Provide an intricate description of the image, capturing its visual elements, including colors, shapes, textures, objects, and any people present. <image>' 'Delve into the details of the image and compose a comprehensive description, incorporating its visual aspects like colors, shapes, textures, objects, and individuals. <image>' 'Craft an elaborate depiction of the image, highlighting its visual components such as colors, shapes, textures, objects, and the presence of any individuals. <image>' 'Compose a detailed account of the image, encompassing its visual characteristics, like colors, shapes, textures, objects, and any human subjects, by
2308.06394#72
Detecting and Preventing Hallucinations in Large Vision Language Models
Instruction-tuned Large Vision Language Models (LVLMs) have significantly advanced in generalizing across a diverse set of multi-modal tasks, especially for Visual Question Answering (VQA). However, generating detailed responses that are visually grounded is still a challenging task for these models. We find that even the current state-of-the-art LVLMs (InstructBLIP) still contain a staggering 30 percent of hallucinatory text in the form of non-existent objects, unfaithful descriptions, and inaccurate relationships. To address this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion Dataset that can be used to train and benchmark models for hallucination detection and prevention. M-HalDetect consists of 16k fine-grained annotations on VQA examples, making it the first comprehensive multi-modal hallucination detection dataset for detailed image descriptions. Unlike previous work that only considers object hallucination, we additionally annotate both entity descriptions and relationships that are unfaithful. To demonstrate the potential of this dataset for hallucination prevention, we optimize InstructBLIP through our novel Fine-grained Direct Preference Optimization (FDPO). We also train fine-grained multi-modal reward models from InstructBLIP and evaluate their effectiveness with best-of-n rejection sampling. We perform human evaluation on both FDPO and rejection sampling, and find that they reduce hallucination rates in InstructBLIP by 41% and 55% respectively. We also find that our reward model generalizes to other multi-modal models, reducing hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has strong correlation with human-evaluated accuracy scores.
http://arxiv.org/pdf/2308.06394
Anisha Gunjal, Jihan Yin, Erhan Bas
cs.CV, cs.LG
preprint
null
cs.CV
20230811
20230818
[ { "id": "2302.04023" }, { "id": "2305.17926" }, { "id": "2307.04964" }, { "id": "2305.20050" }, { "id": "2306.14895" }, { "id": "1803.01937" }, { "id": "2305.18290" }, { "id": "2204.05862" }, { "id": "2306.14565" }, { "id": "2305.06500" }, { "id": "2306.01693" }, { "id": "2304.08485" }, { "id": "2305.10355" } ]
2308.06394
73
'Compose a detailed account of the image, encompassing its visual characteristics, like colors, shapes, textures, objects, and any human subjects, by paying careful attention to the specifics. <image>' 'Compose a comprehensive description of the image, capturing its visual elements, including colors, shapes, textures, objects, and any individuals. In addition, depict the activities or interactions happening within the scene and provide an overall analysis of the composition, atmosphere, and narrative conveyed. <image>' 'Give a detailed description of the image. <image>' 'Write a detailed description of the given image. <image>'
2308.06394#73
Detecting and Preventing Hallucinations in Large Vision Language Models
Instruction-tuned Large Vision Language Models (LVLMs) have significantly advanced in generalizing across a diverse set of multi-modal tasks, especially for Visual Question Answering (VQA). However, generating detailed responses that are visually grounded is still a challenging task for these models. We find that even the current state-of-the-art LVLMs (InstructBLIP) still contain a staggering 30 percent of hallucinatory text in the form of non-existent objects, unfaithful descriptions, and inaccurate relationships. To address this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion Dataset that can be used to train and benchmark models for hallucination detection and prevention. M-HalDetect consists of 16k fine-grained annotations on VQA examples, making it the first comprehensive multi-modal hallucination detection dataset for detailed image descriptions. Unlike previous work that only considers object hallucination, we additionally annotate both entity descriptions and relationships that are unfaithful. To demonstrate the potential of this dataset for hallucination prevention, we optimize InstructBLIP through our novel Fine-grained Direct Preference Optimization (FDPO). We also train fine-grained multi-modal reward models from InstructBLIP and evaluate their effectiveness with best-of-n rejection sampling. We perform human evaluation on both FDPO and rejection sampling, and find that they reduce hallucination rates in InstructBLIP by 41% and 55% respectively. We also find that our reward model generalizes to other multi-modal models, reducing hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has strong correlation with human-evaluated accuracy scores.
http://arxiv.org/pdf/2308.06394
Anisha Gunjal, Jihan Yin, Erhan Bas
cs.CV, cs.LG
preprint
null
cs.CV
20230811
20230818
[ { "id": "2302.04023" }, { "id": "2305.17926" }, { "id": "2307.04964" }, { "id": "2305.20050" }, { "id": "2306.14895" }, { "id": "1803.01937" }, { "id": "2305.18290" }, { "id": "2204.05862" }, { "id": "2306.14565" }, { "id": "2305.06500" }, { "id": "2306.01693" }, { "id": "2304.08485" }, { "id": "2305.10355" } ]
2308.05696
0
# A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment Yingxiu Zhao1, Bowen Yu2∗, Binyuan Hui2, Haiyang Yu2, Fei Huang2, Yongbin Li2∗, Nevin L. Zhang1 1 The Hong Kong University of Science and Technology, 2 Alibaba Group {yzhaocx,lzhang}@connect.ust.hk, {yubowen.ybw,binyuan.hby,yifei.yhy,f.huang,shuide.lyb}@alibaba-inc.com # Abstract
2308.05696#0
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose tree-instruct to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
1
Zhiyuan Liu Tsinghua University Beijing, China [email protected] # ABSTRACT Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
2308.05481#1
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
1
# Abstract Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose tree-instruct to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.2 # Introduction
2308.05696#1
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose tree-instruct to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05696
2
# Introduction The latest generation of large language models (LLMs) has attracted significant attention due to their immense potential in language technologies [26, 37, 44, 20]. To enhance interactive user requests and chat interfaces, these models undergo instruction-tuning using supervised input-output pairs [16, 17, 10]. This process enables the model to comprehend the required style and format for effective user interaction, showcasing the knowledge and capabilities gained during pre-training [28]. Consequently, the efficacy of instruction data significantly influences LLMs' abilities, shaping users' perceptions of their capabilities [43, 19, 9]. Recently, LIMA has demonstrated that with just 1000 carefully curated prompts and responses, an LLM can achieve remarkably strong performance [48]. This suggests that the scaling laws of instruction tuning are not solely dependent on data quantity but rather influenced by prompt diversity and quality. However, one critical and less-explored aspect ∗Correspondence to: Bowen Yu <[email protected]>, Yongbin Li <[email protected]>. 2 The data and code of this work are available at https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/tree-instruct
2308.05696#2
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose tree-instruct to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
3
1 INTRODUCTION Limitations of DBAs. Currently, most companies still rely on DBAs for database maintenance (DM, e.g., tuning, configuring, diagnosing, optimizing) to ensure high performance, availability and reliability of the databases. However, there is a significant gap between DBAs and DM tasks. First, it takes a long time to train a DBA. There are numerous relevant documents (e.g., administrator guides), which can span over 10,000 pages for just one database product, and it takes DBAs several years to partially grasp the skills by applying them in real practice. Second, it is hard to obtain enough DBAs to manage a large number of database instances, e.g., millions of instances on cloud databases. Third, a DBA may not provide an in-time response in emergent cases (especially for correlated issues across multiple database modules), causing great financial losses. Limitations of Database Tools. Many database products are equipped with semi-automatic maintenance tools to relieve the pressure on human DBAs [5, 6, 10–12]. However, they have several limitations. First, they are built on empirical rules [4, 24] or small-scale ML models (e.g., classifiers [13]), which have poor text processing capability
2308.05481#3
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
3
[Figure: win-rate plots versus the number of added nodes (3, 6, 10), with an overall curve (left) and per-subset curves for helpful_base, alpaca, self-instruction, Wizard-LM, and koala (right).] Figure 1: The scaling law of instruction complexity. We experiment with enhancing the complexity of semantic trees for 1,000 Alpaca instructions by adding extra 3, 6, and 10 nodes. We then evaluate models fine-tuned on instruction data of varying complexities against text-davinci003 in terms of win rate on AlpacaEval (Left). Additionally, we examine win rates on different subsets of AlpacaEval (Right). In the left figure, we indicate the average token count for instructions of different complexity levels. We also use WizardLM's in-depth deepening as the baseline.
2308.05696#3
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose tree-instruct to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
4
First, they are built on empirical rules [4, 24] or small-scale ML models (e.g., classifiers [13]), which have poor text processing capability and cannot utilize available documents to answer basic questions. Second, they cannot flexibly generalize to scenario changes. For empirical methods, it is tedious to manually update rules to match the newest versions of documents. And learned methods require costly model retraining and are not suitable for online maintenance. Third, they cannot reason about the root cause of an anomaly like DBAs do, such as looking up more system views based
2308.05481#4
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
4
of evaluating instruction data is complexity. There are at least three unanswered questions related to complexity: (1) Scaling law of complexity: Intuitively, more complex instruction data might elicit more potential capabilities in LLMs to address intricate problems [23, 25]. WizardLM [45] introduces in-depth and in-breadth evolving methods to rewrite prompts into more complex and diverse versions, resulting in a 12.4% increase in LLMs' win rate with the same amount of data. Yet, whether WizardLM's performance improvement is due to complexity or merely derived from diversity remains uncertain. Moreover, the ongoing enhancements in complexity are yet to be explored. (2) Relationship between complexity-induced performance improvement and token quantity: Enhancing instance complexity inevitably increases the number of tokens per instance [11]. While WizardLM exhibits performance improvements with the same instance quantity, it increases the number of tokens per instance. This raises the question of whether complexity-induced improvement in LLMs results from increased training tokens. As is known, enlarging LLMs' pretraining token counts can lead to better performance [24, 36]. (3) Effectiveness of
2308.05696#4
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose tree-instruct to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
5
on the initial analysis results. This capability is vital for detecting useful information in complex cases. Our Vision: A Human-Beyond Database Administrator. To this end, we aim to build a human-beyond "DBA" that can tirelessly learn from documents (see Figure 1), which, given a set of documents, automatically (1) learns experience from the documents, (2) obtains status metrics by interacting with the database, (3) reasons about possible root causes from the abnormal metrics, and (4) accordingly gives optimization advice by calling proper tools. Challenges. Recent advances in Large Language Models (LLMs) have demonstrated superiority in understanding natural language, generating basic code, and using external tools. However, leveraging LLMs to design a "human-beyond DBA" is still challenging. (1) Experience learning from documents. Just like human learners taking notes in classes, although LLMs have undergone training on a vast corpus, important knowledge points (e.g., diagnosis experience) cannot be easily utilized without careful attention. Moreover, most texts are long documents (with varying input lengths and section correlations), and different formats of the extracted experience can greatly affect the utilization capability of the LLM.
2308.05481#5
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
5
results from increased training tokens. As is known, enlarging LLMs' pretraining token counts can lead to better performance [24, 36]. (3) Effectiveness of complexity-based curriculum instruction learning: Curriculum learning is a strategy in machine learning that starts with easy instances and gradually introduces harder ones [4]. Its effectiveness has been demonstrated in various NLP tasks like machine translation [49], dialogue [50], and question answering [31]. However, its potential efficacy in instruction tuning is under-explored.
2308.05696#5
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose tree-instruct to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
6
(2) Reasoning by interacting with the database. With the extracted experience, we need to inspire the LLM to reason about the given anomalies. Different from basic prompt design in machine learning, database diagnosis is an interactive procedure with the database (e.g., looking up system views or metrics). However, LLM responses are often untrustworthy (the "hallucination" problem), and it is critical to design strategies that guide the LLM to utilize proper interfaces of the database and derive reasonable analysis. (3) Mechanism for communication across multiple LLMs. Similar to human beings, one LLM alone may be stuck in sub-optimal
2308.05481#6
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
6
However, to answer the aforementioned questions, the key hurdle lies in finding a controlled way to increase the complexity of instruction data without introducing unwanted factors such as diversity. WizardLM [45] employs an in-depth evolving prompt like "Your objective is to rewrite a given prompt into a more complex version to make ChatGPT and GPT4 a bit harder to handle." to complicate the existing instructions. Unfortunately, although intended to enhance complexity, this approach might inadvertently introduce diversity by diverting from the initial instruction objectives. This issue becomes particularly severe when repeatedly employing in-depth evolving to achieve varying levels of complexity. We study and analyze the instructions before and after in-depth evolving in Sec. 4.1. As illustrated in Fig. 2, the iteratively evolved instructions append additional objectives that deviate from the original instructions, showcasing a greater diversity. To address this concern, we propose Tree-Instruct, which involves prompting LLMs to add a specific number of new nodes to the semantic tree of an existing instruction, as opposed to manipulating the text sequence directly, as done in Self-Instruct [41] or WizardLM [45]. We use the number of added nodes to represent the introduced level of complexity. The advantage of this approach lies in the fact that semantic tree nodes lack any sequential order [32]. By enforcing LLMs to operate on the
2308.05696#6
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose tree-instruct to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
7
solutions, and it is vital to derive a framework where multiple LLMs collaborate to tackle complex database problems. By pooling their collective intelligence, these LLMs can provide comprehensive and smart solutions that a single LLM or even a skilled human DBA would struggle to come up with. Idea of LLM as DBA. Based on the above observations, we introduce D-Bot, an LLM-based database administrator. First, D-Bot transforms documents into experiential knowledge by dividing them into manageable chunks and summarizing them for further extraction of maintenance insights with the LLM. Second, it iteratively generates and assesses different formats of task descriptions to assist the LLM in understanding the maintenance tasks better. Third, D-Bot utilizes external tools by employing matching algorithms to select appropriate tools and providing the LLM with instructions on how to use the APIs of the selected tools. Once equipped with the experience, tools, and input prompt, the LLM can detect anomalies, analyze root causes, and provide suggestions, following a tree-of-thought strategy to revert to previous steps if a failure occurs. Moreover, D-Bot promotes collaborative diagnosis by allowing multiple LLMs to communicate
2308.05481#7
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
7
semantic tree, this process becomes analogous to inserting new words into the middle of the original instructions. This compels the models to add complexity while adhering to the structural constraints of the initial instruction rather than merely appending new instructions, which significantly mitigates the issue of straying from the primary theme of the initial instruction. We leverage GPT-4 to assess the consistency of evolved instructions with the original ones, and the results verify that Tree-Instruct improves WizardLM's consistency score from 0.56 to 0.69. Fig. 1 highlights how the number of added nodes raises the complexity level of the samples. With the help of Tree-Instruct, we have obtained the following preliminary experimental conclusions:
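To make the node-insertion step concrete, here is a minimal sketch of how one might prompt an LLM to grow an instruction's semantic tree by a fixed number of nodes. The prompt wording and the `call_llm` wrapper are our own illustrative assumptions; the excerpt does not give Tree-Instruct's exact prompt.

```python
# Illustrative sketch of Tree-Instruct's node-insertion step. The prompt wording
# and the `call_llm` wrapper are assumptions; the excerpt does not give the
# exact prompt used in the paper.

def build_tree_instruct_prompt(instruction: str, num_nodes: int) -> str:
    """Ask an LLM to grow the instruction's semantic tree by `num_nodes` nodes."""
    return (
        "You will be given an instruction. First derive its semantic parse tree, "
        f"then add exactly {num_nodes} new nodes (words or phrases) to that tree, "
        "attaching them inside the existing structure rather than appending text "
        "at the end. Finally, verbalize the modified tree as one coherent "
        "instruction that keeps the original topic.\n\n"
        f"Instruction: {instruction}\nNew instruction:"
    )

def tree_instruct(instruction: str, num_nodes: int, call_llm) -> str:
    # `call_llm` is any callable mapping a prompt string to a completion string.
    return call_llm(build_tree_instruct_prompt(instruction, num_nodes)).strip()
```

Varying `num_nodes` (e.g., 3, 6, 10) yields the graded difficulty levels the experimental conclusions refer to.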
2308.05696#7
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
8
of thought strategy to revert to previous steps if a failure occurs. Moreover, D-Bot promotes collaborative diagnosis by allowing multiple LLMs to communicate based on predefined environmental settings, inspiring more robust solutions via debate-like communications. Contributions. We make the following contributions. (1) We design an LLM-centric database maintenance framework, and explore its potential to overcome the limitations of traditional strategies. (2) We propose an effective data collection mechanism by (i) detecting experiential knowledge from documents and (ii) leveraging external tools with matching algorithms. (3) We propose a root cause analysis method that utilizes LLM and a tree search algorithm for accurate diagnosis. (4) We propose an innovative concept of collaborative diagnosis among LLMs, thereby offering more comprehensive and robust solutions to complex database problems. (5) Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes.
2308.05481#8
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
8
With the help of Tree-Instruct, we have obtained the following preliminary experimental conclusions: (1) As the complexity of the instruction data increases, the benefits of instruction tuning continue to grow: Following LIMA, we attempt instruction tuning using 1,000 samples from Alpaca-GPT-4 as a base. We add 3, 6, and 10 nodes to the semantic tree of each sample, resulting in performance gains of 14%, 18%, and 24%, respectively, across eight sub-skills such as commonsense, writing, and coding, showing consistent improvements. Furthermore, this scaling law extends to more complex instruction data. For instance, when fine-tuning on 6,000 conversations filtered from ShareGPT via OpenChat [38] (which shows excellent performance among open-source LLMs), we observe that by increasing the complexity of around 3,000 user instructions, the win rate increases from 80.87% to 82% on the AlpacaEval leaderboard3.
2308.05696#8
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
9
# 2 PRELIMINARIES Database Anomalies. In databases, there are five common problems that can negatively affect the normal execution status. (1) Running Slow. The database exhibits a longer response time than expected, leading to bad execution performance. (2) Full Disk Capacity. The database's disk space is exhausted, preventing it from storing new data. (3) Execution Errors. The database experiences errors, potentially due to improper error handling in the application (e.g., leaking sensitive data or system details) or issues within the database (e.g., improper data types). (4) Hanging. The database becomes unresponsive, which is usually caused by long-running queries, deadlocks, or resource contention. (5) Crashing. The database unexpectedly shuts down, making data inaccessible. For a mature database product, each anomaly type is explained in the documentation and is suitable to be learned by LLMs. Observation Tools for Anomaly Detection. "Observability of the database" is vital to detect the above anomalies, and includes logs, metrics, and traces. (1) Logs are records of database events. For example, PostgreSQL supports slow query logs (with error messages that can help debug and solve execution issues), but these logs may
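As one concrete example of the metrics-based observability described here, the slowest query templates can be pulled straight from pg_stat_statements. A minimal sketch, assuming PostgreSQL 13+ (column names differ in older versions), the extension enabled, and psycopg2 installed:

```python
# Pull the slowest query templates from pg_stat_statements. Assumes PostgreSQL
# 13+ (the mean_exec_time column was named mean_time before v13), the extension
# enabled, and psycopg2 installed.
import psycopg2

def slowest_queries(dsn: str, limit: int = 5):
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(
            """
            SELECT query, calls, mean_exec_time
            FROM pg_stat_statements
            ORDER BY mean_exec_time DESC
            LIMIT %s;
            """,
            (limit,),
        )
        return cur.fetchall()  # [(query template, call count, mean ms), ...]
```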
2308.05481#9
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
9
(2) The increase in complexity partly comes from additional tokens, but a few complex instructions outperform diverse yet simple instructions under the same token budget: We find that as the complexity increases, the number of tokens also increases. Adding 10 nodes in the tree increases the average token length of samples from 186 to 607. Hence, to make a fair comparison, we increase the number of original instructions from 1,000 to 4,000 to match the total token quantity of our tree-instructed samples. Under this setting, the performance gain from adding 10 nodes still exceeds 20%. This indicates that the improvement due to complexity is partly attributed to the increased tokens, but increasing the complexity of samples is equivalent to the diversity achieved by four times the token count of simple samples. Moreover, when considering the same token count, instructions evolved from Tree-Instruct exhibit a 5% higher win rate compared to the in-depth deepening of WizardLM, making it a more effective method for increasing complexity.
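Matching token budgets, as in the comparison above, requires a fixed tokenizer. A sketch using tiktoken; the tokenizer choice and the greedy selection policy are our assumptions, not details from the paper:

```python
# Token-budget matching for a fair comparison between complex and simple
# instruction sets. The tokenizer choice (tiktoken's cl100k_base) and the
# greedy selection policy are assumptions, not details from the paper.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def total_tokens(samples: list[str]) -> int:
    return sum(len(enc.encode(s)) for s in samples)

def match_budget(simple_pool: list[str], budget: int) -> list[str]:
    """Greedily take simple instructions until the token budget is filled."""
    chosen, used = [], 0
    for s in simple_pool:
        cost = len(enc.encode(s))
        if used + cost > budget:
            break
        chosen.append(s)
        used += cost
    return chosen

# e.g. match_budget(simple_instructions, total_tokens(tree_instructed_samples))
```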
2308.05696#9
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
10
record a large amount of data and are generally not enabled in the online stage. (2) Metrics capture aggregated database and system statistics. For example, views like pg_stat_statements record the templates and statistics of slow queries; tools like Prometheus [20] provide numerous monitoring metrics, making it possible to capture the real-time system status. (3) Traces provide visibility into how requests behave during execution in the database. Different from logs, which help to identify the database problem, traces help to locate the specific abnormal workload or application. Optimization Tools for Anomaly Solving. Users are mainly concerned with how to restore normal status after an anomaly occurs. Here we showcase some optimization tools. (1) For slow queries, since most open-source databases are weak in logical transformation, there are external engines (e.g., Calcite with ∼120 query rewrite rules) and tuning guides (e.g., Oracle with over 34 transformation suggestions) that help to optimize slow queries. (2) For knob tuning, many failures (e.g., max_connections in Postgres) or bad performance (e.g., memory management knobs) are correlated with
2308.05481#10
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
10
(3) Curriculum instruction tuning may not be effective; increasing complexity is all you need: We try curriculum learning by gradually training on harder samples, i.e., first training on data with three added nodes, then six nodes, and finally ten nodes. We observe that, with the same number of training steps, the curriculum learning approach does outperform training with a mixed difficulty of samples, but it still falls short compared to directly training with the ten-added-node samples. This indicates that when we have more complex samples, the significance of simpler samples diminishes greatly, suggesting that repeated training with complex samples may be sufficient (a schedule sketch is given below). # 2 Related Work Large Language Models (LLMs), trained on extensive textual datasets, have risen as premier solutions for a diverse array of NLP tasks [47]. Despite their remarkable performance, these models are not without limitations, which encompass potential misunderstandings of human instructions, the propensity to generate biased content, and the sporadic generation of hallucinated information. Consequently, bringing LLMs in line with human expectations has become a central focal point within the research community [3, 34].
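A schedule sketch of the curriculum setup just described: stage the fine-tuning over datasets of increasing complexity under one total step budget. `train_steps` is a hypothetical stand-in for whatever fine-tuning loop is used.

```python
# Schedule sketch of the curriculum setup described above: train in stages of
# increasing complexity under one total step budget. `train_steps` is a
# hypothetical stand-in for whatever fine-tuning loop is used.

def curriculum_tuning(model, stages, total_steps, train_steps):
    """stages: datasets ordered easy -> hard, e.g. [added3, added6, added10]."""
    per_stage = total_steps // len(stages)
    for dataset in stages:
        model = train_steps(model, dataset, steps=per_stage)
    return model

# Reported ordering: curriculum_tuning(m, [d3, d6, d10], N, f) beats a
# mixed-difficulty baseline, but train_steps(m, d10, steps=N) beats both.
```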
2308.05696#10
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
11
For knob tuning, many failures (e.g., max_connections in Postgres) or bad performance (e.g., memory management knobs) are correlated with database knobs (e.g., for a slow workload, increase innodb_buffer_pool_size in MySQL by 5% if the memory usage is lower than 60%). Similarly, there are index tuning rules that generate potentially useful indexes (e.g., taking columns within the same predicate as a composite index). Besides, we can utilize more advanced methods, such as selecting among heuristic methods [3, 21, 22] and learned methods [7–9, 15, 23, 25, 26] for problems like missing indexes, which is not within the scope of this paper.
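The knob rule quoted above is mechanical enough to state as code. A sketch with the thresholds taken from the text; how memory usage and the current pool size are fetched is left to the monitoring stack.

```python
# The quoted knob rule as an executable check. Thresholds (60% memory usage,
# +5% pool size) come from the text; fetching the metrics is left to the
# monitoring stack.

def buffer_pool_suggestion(memory_usage_pct: float, current_pool_bytes: int):
    """For a slow MySQL workload, grow innodb_buffer_pool_size by 5%
    if memory usage is below 60%."""
    if memory_usage_pct < 60.0:
        new_size = int(current_pool_bytes * 1.05)
        return f"SET GLOBAL innodb_buffer_pool_size = {new_size};"
    return None  # memory is already tight; do not grow the pool
```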
2308.05481#11
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
11
To attain this alignment, researchers need to amass high-quality instructional data that authentically mirrors human needs and expectations. A rational starting point for data collection involves the adaptation of existing NLP benchmarks into natural language instructions; PromptSource [2], SuperNaturalInstruction [42], Unnatural Instructions [15] and FLAN [21] are spearheading this strategy. These benchmarks encompass a wide range of NLP tasks, spanning dialogue, reasoning, and coding, all unified under the realm of language instructions. TÜLU [40] showcases that instructions from NLP tasks significantly bolster the reasoning prowess of aligned LLMs, where the diversity of tasks plays a pivotal role in shaping the capabilities of LLMs. Nevertheless, a notable trend in NLP datasets is their propensity to emphasize particular skills, consequently yielding instructions that possess a somewhat confined scope. This constraint has # 3https://tatsu-lab.github.io/alpaca_eval/ # Initial instruction: Implementing effective strategies to curb environmental pollutants in the atmosphere. # Tree-10-nodes instruction: Implement effective strategies to curb environmental pollutants in the atmosphere at different altitudes by reducing emissions from industrial sources like factories and vehicles. Additionally, monitor these emissions using specialized equipment and stringently enforce regulations to ensure industries adhere to best practices and environmental standards.
2308.05696#11
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
12
We aim to design D-Bot, an LLM-based DBA, for automatically diagnosing database anomalies and using the LLM to directly (or, by calling appropriate tools, indirectly) provide the root causes. # 3 THE VISION OF D-BOT Existing LLMs are criticized for problems like "Brain in a Vat" [14]. Thus, it is essential to establish close connections between LLMs and the target database, allowing us to guide LLMs in effectively maintaining the database's health and functionality. Hence, we propose D-Bot, which is composed of two stages. First, in the preparation stage, D-Bot generates experience (from documents) and prompt templates (from diagnosis samples), which are vital to guide online maintenance. • Documents → Experience. Given a large volume of diverse, long, unstructured database documents (e.g., database manuals, white papers, blogs), we first split each document into chunks that can be processed by the LLM. To aggregate correlated chunks together (e.g., chunk v_i that explains the meaning of "bloat-table" and chunk v_j that utilizes "bloat-table" in root cause analysis), we generate a summary for each chunk based on both its content and its subsections. Finally, we utilize LLM to extract maintenance experience from chunks with similar summaries (Section 4).
2308.05481#12
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
12
[Figure 2, tree panel: the semantic parse tree of the initial instruction, with the nodes added by Tree-Instruct highlighted.] # WizardLM Deepening Evolve-iteration-3: Investigating and formulating intricate methodologies, deeply anchored in cutting-edge quantum and classical scientific principles, to systematically and holistically reduce, monitor, and assess both primary and secondary atmospheric environmental pollutants. This approach is crucial for ensuring sustainable socio-economic progress while actively safeguarding and nurturing our planet's delicate ecological balance. Figure 2: The instruction generated by different evolving methods: Tree-instruction after adding ten nodes and WizardLM by iteratively deepening three times. We also demonstrate how Tree-Instruct enhances the complexity of the original instruction's semantic tree by introducing three nodes (orange), six nodes (green), and ten nodes (purple).
2308.05696#12
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
13
• Prompt Template Generation. To help LLM better understand the DM tasks, we iteratively generate and score different formats of task descriptions using DM samples (i.e., given the anomaly and solutions, ask LLM to describe the task), and adopt the task description that both scores high performance and is sensible to human DBAs (in case of learning bias) for LLM diagnosis (Section 5; a sketch of this loop follows below).
Figure 2: Overview of D-Bot. Recoverable panel titles: Documents → Experience (segmentation, chunk summary, experience extraction); External Tool Learning (tool retrieval, query and answer); Prompt Template Generation (prompt generation, prompt scoring); Collaborative Diagnosis (role allocation, environment settings, chat summary).
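The generate-and-score loop for task descriptions can be sketched as follows; `propose` (ask an LLM to describe the task from (anomaly, solution) samples) and `score` (diagnosis accuracy when the description is used as the prompt preamble) are hypothetical callables.

```python
# Generate-and-score loop for task descriptions. `propose` and `score` are
# hypothetical callables standing in for the LLM-based description generator
# and the diagnosis-accuracy scorer.

def best_prompt_template(samples, propose, score, n_candidates: int = 8):
    candidates = [propose(samples) for _ in range(n_candidates)]
    scored = [(score(c, samples), c) for c in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # Per the text, the top candidate should still pass a human-DBA sanity
    # check (to guard against learning bias) before being adopted.
    return scored[0]  # (score, task description)
```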
2308.05481#13
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
13
the potential to impede their capacity to meet the intricate requirements of real-world applications. In order to tackle these challenges, one possible approach is to formulate instructions via purposeful human annotations. An exemplary precursor to such a corpus is OpenAssistant [19], which comprises over 10k dialogues involving the participation of 13k annotators from around the world. Another remarkable venture into harnessing human-generated instructions through crowd-sourcing is ShareGPT 4. This platform encourages users to contribute and exchange their engaging conversations with ChatGPT and GPT4.
2308.05696#13
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
14
Figure 2: Overview of D-Bot. Second, in the maintenance stage, given an anomaly, D-Bot iteratively reasons about the possible root causes by taking advantage of external tools and multi-LLM communication. • External Tool Learning. For a given anomaly, D-Bot first matches relevant tools using algorithms like Dense Retrieval (a retrieval sketch follows below). Next, D-Bot provides the tool APIs together with their descriptions to the LLM (e.g., function calls in GPT-4). After that, LLM can utilize these APIs to obtain metric values or optimization solutions. For example, in PostgreSQL, LLM can acquire the templates of the slowest queries from the pg_activity view. If these queries consume much CPU resource (e.g., over 80%), they could be root causes and can be optimized with the rewriting tool (Section 6). cause can affect the database performance (e.g., the performance hazards of many dead tuples); "metrics" provide hints for matching with this experience segment, i.e., LLM will utilize this experience if the abnormal metrics exist in the "metrics" field; "steps" provide the detailed procedure of checking whether the root cause exists by interacting with the database (e.g., obtaining the ratio of dead tuples and live tuples from table statistics views).
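One plausible instantiation of the tool-matching step (the text names Dense Retrieval but no specific encoder, so the model choice is ours) uses sentence-transformers to rank tool descriptions against the anomaly text:

```python
# Dense-retrieval tool matching; the encoder choice is an assumption.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def match_tools(anomaly_text: str, tool_descriptions: dict[str, str], k: int = 3):
    names = list(tool_descriptions)
    tool_emb = encoder.encode([tool_descriptions[n] for n in names],
                              convert_to_tensor=True)
    query_emb = encoder.encode(anomaly_text, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, tool_emb)[0]
    ranked = sorted(zip(names, scores.tolist()), key=lambda p: -p[1])
    return ranked[:k]  # top-k (tool name, similarity) pairs handed to the LLM
```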
2308.05481#14
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
14
While human annotation ensures both quality and diversity, it becomes challenging to ensure the quantity and complexity of instructional data due to the highly expensive annotation process [7], and the distribution of difficulty levels in human-created instructions tends to skew towards being easy [23]. To address this issue, Self-Instruct [41] leverages ChatGPT's in-context learning capability to generate a large volume of instructions from a predefined set of human-annotated instructions spanning diverse topics and task types. Building upon this foundation, LIMA [48] and Alpagasus [5] separately validate the significant impact of data diversity and quality on instructional effectiveness. Selecting thousands of high-quality and diverse instructional examples proves more advantageous than using the entire dataset, and further increasing the number of instructions could potentially induce a semantic shift in the LLMs [1]. Up to this point, three key metrics within the instructional data (diversity, quality, and quantity) have been elucidated for their impact on tuning, though exploration into complexity remains insufficient. While WizardLM [45] demonstrates that evolving both the complexity and
2308.05696#14
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
15
1 "name": "many_dead_tuples", 2 "content": "If the accessed table has too many dead tuples, it can cause bloat-table and degrade performance", 3 "metrics": ["live_tuples", "dead_tuples", "table_size", " dead_rate"], • LLM Diagnosis. Although LLM can understand the func- tions of tool APIs, it still may generate incorrect API requests, leading to diagnosis failures. To solve this problem, we em- ploy the tree of thought strategy, where LLM can go back to previous steps if the current step fails. It significantly increases the likelihood of LLMs arriving at reasonable di- agnosis results (Section 7). • Collaborative Diagnosis. A single LLM may execute only the initial diagnosis steps and end up early, leaving the prob- lem inadequately resolved. To address this limitation, we propose the use of multiple LLMs working collaboratively. Each LLM plays a specific role and communicates by the en- vironment settings (e.g., priorities, speaking orders). In this way, we can enable LLMs to engage in debates and inspire more robust solutions (Section 8). # 4 EXPERIENCE DETECTION FROM DOCUMENTS
2308.05481#15
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
15
been elucidated for their impact on tuning, though exploration into complexity remains insufficient. While WizardLM [45] demonstrates that evolving both the complexity and diversity of instructions can lead to performance enhancement, it does not deeply investigate the individual importance of complexity. This paper introduces a method, Tree-Instruct, which enhances instructional complexity while simultaneously constraining thematic consistency to mitigate variations in diversity. Our experiments preliminarily establish a scaling law regarding complexity, show that the improvement resulting from increased complexity is not solely due to the introduction of more training tokens, and illustrate that LLMs only require complex samples for instruction tuning, rather than simple samples serving as foundational padding for curriculum learning.
2308.05696#15
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
16
# 4 EXPERIENCE DETECTION FROM DOCUMENTS Document learning aims to extract experience segments from textual sources, where the extracted segments are potentially useful in different DM cases. For instance, when analyzing the root causes of performance degradation, LLM utilizes the "many_dead_tuples" experience to decide whether dead tuples have negatively affected the efficiency of index lookups and scans. Desired Experience Format. To ensure LLM can efficiently utilize the experience, each experience fragment should include four fields. As shown in the following example, "name" helps LLM to understand the overall function; "content" explains how the root
"steps": "For each accessed table, if the total number of live tuples and dead tuples is within an acceptable limit (1000), and the table size is not too big (50MB), it is not a root cause. Otherwise, if the dead rate also exceeds the threshold (0.02), it is considered a root cause. And we suggest cleaning up dead tuples in time."
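The "steps" field above translates directly into a check against PostgreSQL's statistics views. A sketch with the thresholds from the text (1000 tuples, 50 MB, 0.02 dead rate); assumes psycopg2 and the standard pg_stat_user_tables view:

```python
# The "steps" field turned into a check against PostgreSQL's statistics views.
# Thresholds (1000 tuples, 50 MB, 0.02 dead rate) come from the text.
import psycopg2

def many_dead_tuples(dsn: str) -> list[str]:
    suspects = []
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute("""
            SELECT relname, n_live_tup, n_dead_tup,
                   pg_total_relation_size(relid) AS bytes
            FROM pg_stat_user_tables;
        """)
        for name, live, dead, size in cur.fetchall():
            total = live + dead
            if total <= 1000 and size <= 50 * 1024 * 1024:
                continue                        # within acceptable limits
            if total and dead / total > 0.02:   # dead rate exceeds threshold
                suspects.append(name)           # root-cause candidate: clean up
    return suspects
```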
2308.05481#16
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
16
# 3 Tree-Instruct Enhancing the complexity of natural language text seems like a straightforward task for proficient LLMs. For instance, WizardLM utilizes a simple text prompt to complexify instructions, as mentioned in Sec. 1. However, because LLMs are extensively pre-trained on massive corpora to predict the next token from the preceding context, we have noticed that LLMs can often exploit the given instruction by simply continuing the text beyond the initial prompt to artificially amplify complexity. While adding continuation constraints can enhance the complexity of instructions, it simultaneously leads them away from the core thematic focus. This divergence expands the topic and domain, fostering diversity that hinders our ability to assess the impact of increased instruction complexity in isolation. We leverage GPT-4 to automatically score the consistency (ranging from 0 to 1) of the instructions before and after applying in-depth deepening following WizardLM, and find that it achieves an alignment score of only 0.56. Furthermore, upon iteratively enhancing the instruction's complexity, the guidance might become ineffective, losing its original essence. For instance, it might cease to pose a question, rendering it arduous for the LLM to generate a suitable response. This phenomenon matches observations made by WizardLM, which prompted them to introduce the Elimination Evolving procedure.
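A sketch of the GPT-4 consistency check described above; the judging prompt is our reconstruction (the excerpt does not give the exact wording), and `call_gpt4` is a placeholder wrapper around a chat-completion client.

```python
# Sketch of the GPT-4 consistency check; the judging prompt is a
# reconstruction, and `call_gpt4` is a placeholder LLM wrapper.

def consistency_score(original: str, evolved: str, call_gpt4) -> float:
    prompt = (
        "Rate how consistent the evolved instruction is with the topic and "
        "intent of the original instruction, on a scale from 0 to 1. "
        "Answer with only the number.\n\n"
        f"Original: {original}\nEvolved: {evolved}\nScore:"
    )
    return float(call_gpt4(prompt).strip())  # may raise if the judge digresses
```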
2308.05696#16
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
17
LLM for Experience Detection. It aims to detect experience segments that follow the above format. Since different paragraphs within a long document may be correlated with each other (e.g., the concept of "bloat-table" appearing in "many_dead_tuples" is introduced in another section), we explain how to extract experience segments without losing the technical details. Step 1: Segmentation. Instead of partitioning documents into fixed-length segments, we divide them based on the section structure and content. Initially, the document is divided into chunks using the section separators. If a chunk exceeds the maximum chunk size (e.g., 1k tokens), we further divide it recursively into smaller chunks, as sketched below. Step 2: Chunk Summary. Next, for each chunk denoted as x, a summary x.summary is created by feeding the content of x into LLM with a summarization prompt p_summarize:
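As referenced in Step 1, here is a minimal segmentation sketch under two assumptions: markdown-style headers act as the section separators, and a whitespace word count stands in for a real tokenizer.

```python
import re

MAX_CHUNK_TOKENS = 1000  # ~1k-token chunk limit, as assumed in the paper


def num_tokens(text: str) -> int:
    # Rough whitespace proxy for token count; a real tokenizer could be swapped in.
    return len(text.split())


def split_document(doc: str) -> list[str]:
    """Step 1 (Segmentation): split on section separators, then recurse on oversized chunks."""
    # Assumed separator: markdown-style section headers on their own lines.
    sections = re.split(r"\n(?=#+ )", doc)
    chunks: list[str] = []
    for sec in sections:
        chunks.extend(split_recursively(sec))
    return chunks


def split_recursively(chunk: str) -> list[str]:
    if num_tokens(chunk) <= MAX_CHUNK_TOKENS:
        return [chunk]
    # Fall back to paragraph boundaries, halving until under the limit.
    paras = chunk.split("\n\n")
    mid = max(1, len(paras) // 2)
    left, right = "\n\n".join(paras[:mid]), "\n\n".join(paras[mid:])
    if not right:  # single huge paragraph: hard character split
        half = len(chunk) // 2
        left, right = chunk[:half], chunk[half:]
    return split_recursively(left) + split_recursively(right)
```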
2308.05481#17
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
17
To address this issue, we first consider what determines the complexity of natural language text. In linguistics and education, there is a lack of precise scientific consensus on determining the complexity of text; no single source can precisely summarize a text's complexity. Currently, a widely accepted perspective suggests that qualitative measures of text complexity require an informed judgment of text difficulty based on various factors. The standards use factors like purpose, levels of meaning, structure, language conventions, clarity, and knowledge demands to measure text difficulty [5]. Among these, text structure is a more measurable indicator, as we can convert text sequences into tree structures using mature dependency or semantic tree parsers [33]. Tree structures, prevalent in natural language representations, offer structural insights reflecting human text comprehension [14]. Furthermore, we can gauge text complexity accurately by measuring the width and depth of trees, as a deeper and wider grammar tree signifies more intricate sentence structures [8, 39]. Inspired by the concept of tree complexity, we propose Tree-Instruct, wherein LLMs directly add a specific number of nodes to the semantic tree of an instruction. This increases the tree's width and depth, thereby enhancing text structure complexity. In detail, Tree-Instruct encompasses three steps:
[5] https://www.generationready.com/wp-content/uploads/2021/04/Beginners-Guide-to-Text-Complexity.pdf
2308.05696#17
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
18
p_summarize = Summarize the provided chunk briefly ... Your summary will serve as an index for others to find technical details related to database maintenance ... Pay attention to examples even if the chunk covers other topics.
The generated x.summary acts as a textual index of x, enabling the matching of chunks containing similar content.
Step 3: Experience Extraction. Once the summaries of the chunks are generated, LLM parses the content of each chunk and compares it with the summaries of other chunks having similar content, guided by the extraction prompt p_extract. This way, experience segments that correlate with the key points from the summaries are detected.
p_extract = Given a chunk summary, extract diagnosis experience from the chunk. If uncertain, explore diagnosis experience in chunks with similar summaries.
In our implementation, given a document, we use LLM to extract experience segments into the above 4-field format.
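A minimal sketch of Steps 2 and 3 as a summarize-then-extract loop follows; the llm callable stands in for any chat-completion call, and the naive "related summaries" filter is an illustrative placeholder for real similarity matching.

```python
P_SUMMARIZE = (
    "Summarize the provided chunk briefly ... Your summary will serve as an index "
    "for others to find technical details related to database maintenance ... "
    "Pay attention to examples even if the chunk covers other topics."
)
P_EXTRACT = (
    "Given a chunk summary, extract diagnosis experience from the chunk. "
    "If uncertain, explore diagnosis experience in chunks with similar summaries."
)


def build_index(chunks, llm):
    # Step 2: attach a summary to each chunk; the summary is the chunk's textual index.
    return [{"chunk": c, "summary": llm(f"{P_SUMMARIZE}\n\nChunk:\n{c}")} for c in chunks]


def extract_experiences(indexed, llm, n_related=5):
    # Step 3: parse each chunk against summaries of other chunks with similar content.
    experiences = []
    for item in indexed:
        # Placeholder "similarity": take a few other summaries verbatim.
        related = [o["summary"] for o in indexed if o is not item][:n_related]
        prompt = (
            f"{P_EXTRACT}\n\nSummary: {item['summary']}\n"
            f"Similar summaries: {related}\nChunk:\n{item['chunk']}"
        )
        experiences.append(llm(prompt))  # expected to emit the 4-field experience format
    return experiences
```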
2308.05481#18
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
18
Step 1: Tree Construction involves semantic parsing, where a structured representation is created from a natural language sentence. This process yields a tree structure for an instruction. For instance, given the instruction "Implementing effective strategies to curb environmental pollutants in the atmosphere", we derive an original tree structure Tree-1, as shown in the first tree of Fig. 2. Step 2: Nodes Expansion operates on the acquired tree structure, expanding it in depth or width by adding new nodes, thus influencing the new tree's complexity. We only add meaningful nodes representing nouns or verbs, since words like adjectives or prepositions contribute little to tree complexity. The second tree in Fig. 2 illustrates Tree-2 after adding ten nodes. Step 3: Tree Sentenceization aims to make LLMs revert the complex new tree structure (Tree-2) back to a fluent natural language instruction by introducing connecting words.
Prompt for Tree-Instruct:
You are an instruction rewriter. You need to rewrite a given user instruction following Procedures step by step. You MUST ONLY return the NEW instruction you rewrite.
2308.05696#18
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
19
In our implementation, given a document, we use LLM to extract experience segments into the above 4-field format. Detected Maintenance Experience. In Figure 3, we showcase the simplified diagnosis procedure together with some necessary details, coming from chunks originally in different sections of the given documents (e.g., a maintenance guide with over 100 pages). 1. Background Understanding. It is crucial to grasp the context of system performance, such as recent changes in customer expectations, workload type, or even system settings. 2. Database Pressure Checking. This step identifies database bottlenecks, such as tracking CPU usage and active sessions, and monitoring system views (e.g., pg_stat_activity and pgxc_stat_activity) to focus on non-idle sessions; a minimal sketch follows below. 3. Application Pressure Checking. If there is no apparent pressure on the database or the resource consumption is very low (e.g., CPU usage below 10% and only a few active sessions), it is suggested to investigate the application side, such as exhausted application server resources, high network latency, or slow processing of queries by application servers. 4. System Pressure Checking. The focus shifts to examining the system resources where the database is located, including CPU usage, IO status, and memory consumption.
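As referenced in step 2 above, a non-idle-session check might look like the following minimal sketch, assuming Postgres and the psycopg2 driver; the DSN and helper name are illustrative.

```python
import psycopg2  # assumed driver; any Postgres client works


def non_idle_sessions(dsn: str):
    """Monitor pg_stat_activity and focus on sessions whose state is not idle."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(
            """
            SELECT pid, usename, state, query
            FROM pg_stat_activity
            WHERE state IS NOT NULL AND state <> 'idle';
            """
        )
        return cur.fetchall()


# Usage (hypothetical DSN):
# for pid, user, state, query in non_idle_sessions("dbname=test user=dba"):
#     print(pid, user, state, query[:80])
```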
2308.05481#19
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
19
Prompt for Tree-Instruct:
You are an instruction rewriter. You need to rewrite a given user instruction following Procedures step by step. You MUST ONLY return the NEW instruction you rewrite.
Procedure:
step-1: Parse the old "instruction" to a TREE-1 through Semantic Parsing in the natural language processing field.
step-2: EXPAND the above NEW TREE-1 from DEPTH or WIDTH by ADDING "your_added_number" meaningful NEW NODEs as nouns or verbs to form a NEW TREE-2. The new nodes should be constructed with detailed and pertinent information.
step-3: Generate a totally NEW "instruction" based on the expanded NEW TREE-2.
# Old instruction: "your_instruction"
# New instruction:
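In practice, this prompt can be sent to a chat LLM with "your_added_number" and "your_instruction" filled in. Below is a minimal sketch assuming the OpenAI Python SDK (v1.x) and a gpt-4 model name, matching the paper's use of gpt-4; the client setup, temperature, and helper name are illustrative assumptions.

```python
from openai import OpenAI  # assumed SDK; any chat-completion client works

client = OpenAI()

TREE_INSTRUCT_PROMPT = """You are an instruction rewriter. You need to rewrite a given user \
instruction following Procedures step by step. You MUST ONLY return the NEW instruction you rewrite.
Procedure:
step-1: Parse the old "instruction" to a TREE-1 through Semantic Parsing in the natural language processing field.
step-2: EXPAND the above NEW TREE-1 from DEPTH or WIDTH by ADDING "{n}" meaningful NEW NODEs as nouns or verbs to form a NEW TREE-2. The new nodes should be constructed with detailed and pertinent information.
step-3: Generate a totally NEW "instruction" based on the expanded NEW TREE-2.
# Old instruction: "{instruction}"
# New instruction:"""


def tree_instruct(instruction: str, n_nodes: int) -> str:
    # n_nodes plays the role of "your_added_number": 3, 6, or 10 in the paper's experiments.
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": TREE_INSTRUCT_PROMPT.format(n=n_nodes, instruction=instruction),
        }],
    )
    return resp.choices[0].message.content
```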
2308.05696#19
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
20
4. System Pressure Checking. The focus shifts to examining the system resources where the database is located, including CPU usage, IO status, and memory consumption. 5. Database Usage Checking. Lastly, we can investigate suboptimal database usage behaviors, such as (1) addressing concurrency issues caused by locking waits, (2) examining database configurations, (3) identifying abnormal wait events (e.g., io_event), (4) tackling long/short-term performance declines, and (5) optimizing poorly performing queries that may be causing bottlenecks. # 5 DIAGNOSIS PROMPT GENERATION Instead of directly mapping extracted experience to new cases, we next explore how to teach LLMs to (1) understand the database maintenance tasks and (2) reason over the root causes by themselves. Input Enrichment. With a database anomaly x as input, we can enrich x with additional description information, the so-called input prompt x'. On one hand, x' helps LLM to better understand the task intent. On the other hand, since database diagnosis is generally a complex task that involves multiple steps, x' preliminarily implies how to divide the complex task into sub-tasks in a proper order, thus further enhancing the reasoning of LLM.
2308.05481#20
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
20
# Old instruction: "your_instruction"
# New instruction:
Additionally, we pack all three steps into a single prompt, guiding LLMs to implement our requirements step by step without external semantic parsing tools (see Block 3, where "your_added_number" indicates the desired number of nodes we aim to add to the tree). Especially, we directly control the complexity by adjusting "your_added_number". Visually, with more nodes added, the tree and the instruction become more complex. This gradual increase results in a tree with 3, 6, or 10 additional nodes, progressively increasing the complexity of instructions, as shown in Fig. 2. We also observe that adding nodes to the semantic tree constructs a framework for the original instruction. This approach prevents significant deviations from the main topic. GPT-4's automatic assessment shows that our prompt modifications maintain thematic consistency with a score of 0.69. # 4 Experiments
2308.05696#20
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
21
From our observation, the quality of x' can greatly impact the performance of LLM on maintenance tasks [27] (Figure 2). Thus, we first utilize LLM to suggest candidate prompts based on a small set of input-output pairs (e.g., 5 pairs for a prompt). Second, we rank these generated prompts based on a customized scoring function (e.g., the ratio of detected root causes), and reserve the best prompts (e.g., top-10) as candidates. Finally, we select the best one to serve as the input prompt template for the incoming maintenance tasks; a sketch of this selection loop follows below. # 6 EXTERNAL TOOL LEARNING As we know, the efficient use of tools is a hallmark of human cognitive capabilities [17, 18]. When human beings encounter a new tool, they start to understand the tool and explore how it works, i.e., taking it as something with particular functions and trying to understand what the functions are used for. Likewise, we aim to inspire a similar ability within LLM. Tool Retrieval. We first retrieve the appropriate tools for the diagnosis task at hand, whose descriptions are represented as D_t. Several methods can be used, such as BM25, LLM Embeddings, and Dense Retrieval.
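Returning to the prompt-selection loop referenced above, a minimal sketch follows; llm and diagnose are hypothetical callables, and the candidate count, evaluation cases, and scoring details are illustrative assumptions.

```python
def propose_prompts(llm, io_pairs, k=20):
    # Ask the LLM to suggest k candidate prompt templates from ~5 input-output pairs.
    demo = "\n".join(f"Input: {i}\nOutput: {o}" for i, o in io_pairs)
    return [
        llm(f"Propose an instruction prompt that maps these inputs to outputs:\n{demo}")
        for _ in range(k)
    ]


def score_prompt(prompt, eval_cases, diagnose):
    # Customized score: ratio of root causes detected when using this prompt template.
    hits = sum(
        1 for case in eval_cases
        if case["root_cause"] in diagnose(prompt, case["anomaly"])
    )
    return hits / len(eval_cases)


def select_best_prompt(llm, io_pairs, eval_cases, diagnose, top=10):
    candidates = propose_prompts(llm, io_pairs)
    ranked = sorted(
        candidates,
        key=lambda p: score_prompt(p, eval_cases, diagnose),
        reverse=True,
    )
    best = ranked[:top]  # reserve the top-10 prompts as candidates
    return best[0]       # serve the single best one as the input prompt template
```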
2308.05481#21
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
21
# 4 Experiments In this experiment, our primary objective is to address four key research questions: (1) Can Tree-Instruct, compared to WizardLM's in-depth evolving, better maintain thematic consistency while augmenting complexity? (2) Does increasing the complexity of instructions through Tree-Instruct result in a greater unleashing of LLM's latent potential, i.e., will more intricate instructions yield better outcomes? (3) Given the same token constraints, which approach is better suited for instruction tuning: employing complex yet limited instruction data, or opting for simpler but more diverse instructions? (4) Can curriculum-based instruction tuning methods (moving from simpler to more complex instruction data) yield improvements similar to the substantial enhancements observed in many previous NLP tasks? Our primary experiments are conducted on the Alpaca GPT-4 dataset [29], which contains 52,000 instruction-following examples responded to by GPT-4 using prompts in Alpaca [35].
2308.05696#21
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
22
(1) BM25, simply represented as f(D_t, Q) = BM25, is a common probabilistic retrieval method that ranks tool descriptions (D_t) based on their relevance to the given anomaly (Q) [19]. (2) LLM Embeddings, denoted as f(D_t, L) = LLM_E, convert tool descriptions (D_t) into embeddings (E_t) using the LLM L, i.e., E_t = L(D_t). These embeddings capture the semantic meanings in a multi-dimensional space, helping to find related tools even in the absence of keyword overlap, D_t = LLM_E(E_t).
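A minimal sketch of option (1) using the rank-bm25 package follows; the tool names and descriptions are hypothetical stand-ins for real tool APIs.

```python
from rank_bm25 import BM25Okapi  # pip install rank-bm25; one possible BM25 implementation

# Hypothetical tool API descriptions (illustrative, not the paper's actual tool set).
tool_descriptions = [
    "obtain_anomaly_time_window: fetch the start and end time of the anomaly",
    "check_abnormal_metric: decide whether a monitoring metric deviates from normal",
    "optimize_index_selection: recommend indexes for slow queries",
]


def retrieve_tools(anomaly: str, n: int = 2) -> list[str]:
    # Rank tool descriptions D_t by BM25 relevance to the anomaly Q.
    corpus = [d.lower().split() for d in tool_descriptions]
    bm25 = BM25Okapi(corpus)
    return bm25.get_top_n(anomaly.lower().split(), tool_descriptions, n=n)


print(retrieve_tools("cpu usage spikes and many slow queries during the anomaly window"))
```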
2308.05481#22
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
22
Following LIMA, we randomly select 1,000 instruction samples to form Alpaca-1K, serving as the starting point for our evolutionary process. We query gpt-4 [27] to execute Tree-Instruct, thereby increasing the complexity of each instruction within Alpaca-1K. In order to analyze the scaling law, we introduce three levels of complexity by adding 3, 6, and 10 additional nodes to the instructions, respectively. This allows us to observe the impact of these varying complexities on the outcomes. For the modified instructions, we employ gpt-4 once again to generate corresponding responses. To validate our findings, we replicate the results by applying the in-depth evolving with the deepening prompt provided by WizardLM to the same Alpaca-1K instructions. To demonstrate the scalability of our discoveries to larger datasets, we also conduct experiments on the expansive OpenChat dataset [38]. We employ the pre-trained LLaMA-13B-v1 [37] model as the initialization, fine-tuning it on instruction-tuning datasets generated through different methods. Each GPU processes batches of size 2 (for OpenChat evolved data,
2308.05696#22
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
23
(3) Dense Retrieval, denoted as f(Q, D_t, N) = DR, uses neural networks (N) to generate dense representations of both the anomaly (Q) and the tool descriptions (D_t), separately denoted as Dense_Q and Dense_D. To retrieve the relevant tools, we calculate the similarity between Dense_Q and Dense_D, and rank the tools based on these similarity scores. The proper method for tool retrieval depends on the specific scenario. BM25 is efficient for obtaining quick results when there are large volumes of API descriptions in the tools and clear anomaly characteristics. LLM Embeddings excel at capturing semantic and syntactic relationships, which is especially useful when relevance is not obvious from keywords (e.g., different metrics with similar functions). Dense Retrieval is ideal for vague anomalies, as it captures context and semantic meaning, but it is more computationally costly. # 7 LLM DIAGNOSIS
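Before moving on, here is a minimal sketch of option (3); the sentence-transformers encoder and model name are assumptions — the paper does not name a specific dense encoder.

```python
from sentence_transformers import SentenceTransformer, util  # assumed dense encoder

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice


def dense_retrieve(anomaly: str, tool_descriptions: list[str], n: int = 2) -> list[str]:
    # Encode the anomaly (Dense_Q) and the tool descriptions (Dense_D) ...
    dense_q = model.encode(anomaly, convert_to_tensor=True)
    dense_d = model.encode(tool_descriptions, convert_to_tensor=True)
    # ... then rank tools by cosine similarity between Dense_Q and Dense_D.
    sims = util.cos_sim(dense_q, dense_d)[0]
    top = sims.argsort(descending=True)[:n]
    return [tool_descriptions[int(i)] for i in top]
```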
2308.05481#23
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
23
initialization, fine-tuning it on instruction-tuning datasets generated through different methods. Each GPU processes batches of size 2 (for OpenChat evolved data, the batch size is set to 14), and the maximum sequence length is set to 2048. For optimization, we adopt the AdamW [22] optimizer with a learning rate of 1e-4 and a weight decay of 0.1, following the practices established by OpenChat. The training is performed across 8 A100 GPUs using DeepSpeed ZeRO-2 for a duration of 3 epochs. During inference, a temperature of 0.7 and a top-p value of 0.9 are employed to evaluate all the methods under comparison.
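For illustration, the stated recipe could be expressed with Hugging Face TrainingArguments as below; the paper does not say the HF Trainer was used, and the output directory, DeepSpeed config path, and bf16 flag are assumptions.

```python
from transformers import TrainingArguments

# A sketch of the stated fine-tuning recipe; values mirror the text above,
# while "ds_zero2.json" and bf16 are illustrative assumptions.
args = TrainingArguments(
    output_dir="llama13b-tree-instruct",
    per_device_train_batch_size=2,   # 14 for the OpenChat evolved data
    num_train_epochs=3,
    learning_rate=1e-4,              # AdamW is the Trainer's default optimizer
    weight_decay=0.1,
    deepspeed="ds_zero2.json",       # hypothetical DeepSpeed ZeRO-2 config file
    bf16=True,                       # assumption: mixed precision on 8x A100 GPUs
)
```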
2308.05696#23
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
24
# 7 LLM DIAGNOSIS Tree Search Algorithm using LLM. To avoid diagnosis failures caused by incorrect actions (e.g., a non-existent API name) derived by LLM, we propose to utilize the tree-of-thought strategy, which can guide LLM to go back to previous actions if the current action fails. Step 1: Tree Structure Initialization. We initialize a tree structure whose root node is the diagnosis request (Figure 4). Utility methods are utilized to manipulate the tree structure, and the UCT score for a node v is computed based on the modifications during planning, i.e., UCT(v) = w(v)/n(v) + C * sqrt(ln(N)/n(v)), where n(v) denotes the selection frequency of v, w(v) denotes the success ratio of detecting root causes, N is the total number of simulations, and C is an exploration constant. Note that if the action of node v fails to call a tool API, w(v) equals -1. Step 2: Simulate Execution. This step kicks off the execution of simulations starting from the root node of the tree. It involves selecting nodes based on a specific standard (e.g., detected abnormal metrics). If the criteria for selecting a new node are met, a new node is chosen; otherwise, the node with the highest UCT value is selected.
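A minimal sketch of the node-selection rule follows, assuming the standard UCT form reconstructed above; the exploration constant value and the node data layout are illustrative assumptions, not details from the paper.

```python
import math

C = 1.41  # exploration constant; assumed value, the paper does not specify it


def uct(w: float, n: int, N: int) -> float:
    """UCT(v) = w(v)/n(v) + C * sqrt(ln(N)/n(v))."""
    if n == 0:
        return float("inf")  # unvisited nodes are explored first
    return w / n + C * math.sqrt(math.log(N) / n)


def select_child(children: list[dict], N: int) -> dict:
    # children: nodes with success score "w" and visit count "n";
    # w is set to -1 when the node's action failed to call a tool API.
    return max(children, key=lambda v: uct(v["w"], v["n"], N))


# Usage: pick the next diagnosis action among simulated candidates.
nodes = [{"w": 2.0, "n": 4}, {"w": -1.0, "n": 1}, {"w": 1.0, "n": 1}]
print(select_child(nodes, N=6))
```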
2308.05481#24
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
24
# 4.1 Tree-Instruct is Better than In-Depth Evolving We start by investigating whether operating on a tree, as opposed to a sequence, better aligns with the intended objectives of the original instruction. Recent studies have introduced the LLMs-as-evaluator paradigm, leveraging LLMs to assess candidate samples, which closely approximates human evaluative agreement [6, 13, 18, 46]. Consequently, we employ gpt-4 to gauge which approach exhibits greater consistency with the initial instructions. As depicted in Figure 3, the result indicates that Tree-Instruct, which adds 6 additional nodes to each instruction, achieves a higher degree of alignment with the original instructions in 63% of cases, compared to WizardLM's in-depth deepening, which generates instructions with token counts similar to Tree-6-Nodes. This observation serves as evidence that the presence of a tree structure constraint enables LLMs to more effectively modify instructions within the framework of the original guidance, rather than diverging and incorporating unrelated content.
[Figure 3: fraction of pairwise comparisons judged by gpt-4 — "Tree wins" vs. "WizardLM wins"]
2308.05696#24
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
25
[Figure 3: simplified diagnosis procedure — 1. Background Understanding; 2. Database Pressure Checking; 3. Application Pressure Checking; 4. System Pressure Checking; 5. Database Usage Checking]
System pressure checking details:
- [CPU usage] OPS CPU usage rate or top/sar commands; if you still cannot analyze the cause of high CPU usage, generate a flame graph to find the bottleneck of the kernel code function.
- [IO issues] Hard disk cache / RAID write policy configuration problems; disk bandwidth is throttled (OBS has flow control).
- [Memory issues] Refer to the high memory section.
- [Network issues] Network disconnection; high network latency; full network bandwidth.
Database pressure checking details: check the database-related views or the relevant indicators to clarify whether the issue is on the database side.
- [Database-related views] E.g., pg_stat_activity / pgxc_stat_activity views in Postgres; pay attention to the sessions whose state is not idle.
- [OPS monitoring indicators] E.g., CPU usage rate, active sessions.
Database usage checking details:
- [Concurrency issues] Lock waits due to concurrent updates.
- [Configuration issues] Knobs like shared_buffers / thread pool.
- [Long-term performance] Compare performance diagnosis reports (e.g., Traces in Oracle, WDR in GaussDB) for different time periods.
- [Short-term performance jitters]
2308.05481#25
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
25
Furthermore, our findings demonstrate that Tree-Instruct is more effective than in-depth evolving in eliciting the capabilities of LLMs. We conduct evaluations on the AlpacaEval evaluation set for both methods. AlpacaEval is a recent authoritative leaderboard comprising 805 diverse samples, each showcasing various abilities. The evaluations are performed with gpt-4 as the evaluator, comparing the win rates of models against text-davinci003. As depicted in Table 2, under similar total token counts, Tree-Instruct exhibits a win rate improvement of 5 points over WizardLM's in-depth deepening. We attribute this enhancement to Tree-Instruct's adeptness at closely tailoring instructions to the central topic, thereby introducing complexity. In contrast, in-depth evolving might deviate from the original theme and introduce irrelevant content, resulting in instructions of inadequate difficulty. Such instructions could potentially hinder LLMs from generating appropriate responses, rendering them less effective in the generation process. # 4.2 More Complexity, Better Capability
2308.05696#25
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
26
[Long-term performance] Compare performance diagnosis reports (e.g., Traces in Oracle, WDR in GaussDB) for different time periods.

[Short-term performance jitters] Seconds-level performance jitter; inefficient SQL: a large volume of slow SQL, slow SQL optimization.

If the database side does not significantly perceive the workload pressure, and resource usage is extremely low (e.g., CPU usage is less than 10%), it is recommended to check the application side.

[Possible problems]
- Application server resources are exhausted, e.g., insufficient CPU / I/O / memory;
- The network latency between the application server and the kernel is too high;
- The application server processes query requests slowly, leading to slow dispatch of the query statements.
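The checklist above reduces to a threshold-based triage rule. The following is a minimal sketch; all metric names and thresholds (the 10% CPU bound comes from the text, the rest are illustrative assumptions):

```python
def triage(db_cpu_usage, slow_query_count, app_cpu_free, net_latency_ms):
    """Decide where to focus diagnosis, following the checklist above.

    Inputs are hypothetical pre-collected metrics:
    db_cpu_usage and app_cpu_free in [0, 1], net_latency_ms in milliseconds.
    """
    findings = []
    if db_cpu_usage < 0.10 and slow_query_count == 0:
        # Database barely perceives the workload: suspect the application side.
        if app_cpu_free < 0.05:
            findings.append("application server resources exhausted (CPU/IO/memory)")
        if net_latency_ms > 10:  # assumed latency threshold
            findings.append("high latency between application server and kernel")
        if not findings:
            findings.append("application dispatches query statements slowly")
    else:
        findings.append("database-side issue: inspect slow SQL and diagnosis reports")
    return findings
```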
2308.05481#26
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, a LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results that D-Bot can efficiently and effectively diagnose the root causes and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
26
# 4.2 More Complexity, Better Capability

After demonstrating the effectiveness of Tree-Instruct in enhancing sample complexity, we present a scaling law pertaining to complexity, as depicted in Fig. 1 and Table 2. As the number of nodes gradually increases from Tree-3-Nodes to Tree-6-Nodes and further to Tree-10-Nodes, the model's win rate on AlpacaEval exhibits a remarkable upward trend. This scaling law underscores the significance of complexity within instruction data.

Table 1: Win rate of different methods vs. text-davinci003 on the AlpacaEval leaderboard.

| Method | Win Rate (%) | Token Length |
|---|---|---|
| GPT4 | 95.28 | 1365 |
| LLaMA2-Chat-70B | 92.66 | 1790 |
| Claude-2 | 91.36 | 1069 |
| OpenChat-13B-V3.1 | 89.49 | 1484 |
| ChatGPT | 89.37 | 827 |
| WizardLM-13B-V1.2 | 89.17 | 1635 |
| OpenChat-13B∗ | 80.87 | 1632 |
| UltraLM-13B | 80.64 | 1087 |
| WizardLM-13B | 75.31 | 985 |
| OpenChat-13B+Tree-Instruct | 82.00 (+1.13) | 1543 |
2308.05696#26
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
27
Figure 3: The outline of diagnosis experience extracted from documents.

[Figure 4: Example LLM diagnosis by tree of thought; e.g., memory usage is abnormal, and cpu_diagnosis_agent({"dead_tuples": 5000, ...}) returns many_dead_tuples.]

is dedicated to a distinct domain of functions. For example, we include three LLM agents in the initial implementation: (1) Chief DBA is responsible for collaboratively diagnosing and detecting root causes with other agents; (2) CPU Agent is specialized in CPU usage analysis and diagnosis; and (3) Memory Agent focuses on memory usage analysis and diagnosis. Each LLM agent can automatically invoke tool APIs to retrieve database statistics, extract external knowledge, and conduct optimizations. For instance, CPU Agent utilizes the monitoring tool Prometheus to check CPU usage metrics within specific time periods, and determines the root causes of high CPU usage by matching with extracted experience (Section 4). Note that if the CPU/memory agents cannot report useful analysis, Chief DBA is responsible for detecting other potential problems, such as those on the application side.
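The CPU Agent's Prometheus tool call can be approximated with Prometheus's standard HTTP API (`/api/v1/query_range`). A minimal sketch; the endpoint, the PromQL expression, and the abnormality threshold are assumptions rather than the paper's actual configuration:

```python
import requests

PROM = "http://localhost:9090"  # assumed Prometheus endpoint

def cpu_usage_series(start_ts, end_ts, step="15s"):
    """Fetch CPU usage over [start_ts, end_ts] via Prometheus query_range."""
    # node_cpu_seconds_total is a standard node_exporter metric; the exact
    # PromQL expression the paper's agent uses is not specified.
    query = '100 - avg(rate(node_cpu_seconds_total{mode="idle"}[1m])) * 100'
    resp = requests.get(f"{PROM}/api/v1/query_range",
                        params={"query": query, "start": start_ts,
                                "end": end_ts, "step": step})
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return [float(v) for _, v in result[0]["values"]] if result else []

def is_abnormal_cpu(start_ts, end_ts, threshold=85.0):
    """Match against extracted experience: flag sustained high CPU usage."""
    series = cpu_usage_series(start_ts, end_ts)
    return bool(series) and max(series) > threshold
```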
2308.05481#27
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, a LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results that D-Bot can efficiently and effectively diagnose the root causes and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
27
Additionally, we carry out a meticulous evaluation for each skill/category within the Vicuna test sets. These sets are divided into distinct skill sets/categories, allowing for an intricate analysis of the proficiency attained through instruction tuning. Notably, Tree-10-Nodes outperforms Tree-6-Nodes across a majority of categories, encompassing Counterfactual, Roleplay, Knowledge, Generic, and more. Similar trends are evident when comparing Tree-6-Nodes with the original instructions, indicating that augmenting the complexity of instruction data leads to a comprehensive enhancement in the capabilities of the LLM.

[Figure: radar chart over Vicuna test-set categories (counterfactual, coding, common-sense, roleplay, generic, knowledge) comparing text-davinci-003, Tree-10-nodes, Tree-6-nodes, and Alpaca (Tree-0-nodes).]
2308.05696#27
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
28
Step 3: Existing Node Reflection. For each node in the path from the root node to the selected node, reflections are generated based on decisions made at previous nodes. For example, we rely on the LLM to rethink the benefits of analyzing non-resource-relevant metrics. If the LLM decides the action cannot find any useful information, the UCT value is reduced and set to that of its parent node. In this way, we can enhance the diagnosis efficiency.

Step 4: Terminal Condition. If the LLM cannot find any more root causes (corresponding to a leaf node) within a threshold number of attempts (e.g., five), the algorithm ends and the LLM outputs the final analysis based on the detected root causes.

# 8 COLLABORATIVE DIAGNOSIS FOR COMPLEX CASES

A single LLM may be limited in its ability to fully resolve a problem (e.g., stuck in initial steps). Collaborative diagnosis involves the utilization of multiple LLMs to collectively address complex cases by leveraging their unique role capabilities. This section introduces the communicative framework for database diagnosis [1, 16].

• Agents. In the communicative framework, agents can be undertaken by human beings or LLMs. Humans can provide LLM agents with scenario requirements (e.g., business changes over the incoming period) and prior knowledge (e.g., historical anomalies). On the other hand, each LLM agent
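Steps 3-4 read as a reflection pass over a UCT-guided search tree. A minimal sketch under the usual UCT formula; the `Node` layout and the `llm_judges_uninformative` callback are our assumptions, since the paper does not publish these data structures:

```python
import math

class Node:
    def __init__(self, action, parent=None):
        self.action, self.parent = action, parent
        self.value, self.visits = 0.0, 0
        self.uct = float("inf")  # unvisited nodes are tried first

def uct_score(node, c=1.4):
    """Standard UCT: exploitation term plus exploration bonus."""
    if node.visits == 0 or node.parent is None or node.parent.visits == 0:
        return float("inf")
    return (node.value / node.visits
            + c * math.sqrt(math.log(node.parent.visits) / node.visits))

def reflect_path(path, llm_judges_uninformative):
    """Step 3: rethink each decision on the root-to-node path.

    If the LLM decides an action (e.g., analyzing a non-resource-relevant
    metric) yields nothing useful, its UCT value is demoted to the parent's,
    so future selection avoids re-expanding that branch.
    """
    for node in path:
        node.uct = uct_score(node)
        if node.parent is not None and llm_judges_uninformative(node.action):
            node.uct = node.parent.uct  # reduce to the parent's score
```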
2308.05481#28
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, a LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results that D-Bot can efficiently and effectively diagnose the root causes and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
28
Finally, given that our experimentation is based on 1,000 instances, we extend our investigation to validate the effectiveness of Tree-Instruct across a larger dataset using OpenChat. OpenChat is built upon approximately 6K GPT-4 conversations derived from around 90K ShareGPT conversations. It has notably achieved top rankings as an open-source LLM. As we initiate these experiments, OpenChat attains an 80.87% win rate on AlpacaEval. Since OpenChat involves multi-turn conversations, we specifically complexify instructions within single-turn conversations and certain meaningful concluding turns, rather than those containing generic terms like "stop" or "continue." This modification encompasses 3,000 conversations (a sketch of this turn-selection filter follows this passage).

Figure 4: Evaluation of models trained on Alpaca-1K added with various nodes vs. text-davinci003 on categories of the Vicuna test set.

As delineated in Table 1, following the complexification of Tree-Instruct, we enhance OpenChat's performance from 80.87% to 82.00%, underscoring the sustained efficacy of our approach across a larger volume of data.

# 4.3 Less but Complex is Better Than More but Simple
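The turn-selection heuristic referenced above can be written as a small filter over ShareGPT-style conversations. A minimal sketch; the `GENERIC_TERMS` list and the definition of a "meaningful concluding turn" are our guesses at the paper's informal description:

```python
GENERIC_TERMS = {"stop", "continue", "go on", "ok"}  # assumed generic turns

def turns_to_complexify(conversation):
    """Pick user turns eligible for Tree-Instruct complexification.

    conversation: list of {"role": "user"|"assistant", "content": str}.
    Single-turn conversations and meaningful concluding user turns are kept;
    turns that are just generic control phrases are skipped.
    """
    user_turns = [i for i, m in enumerate(conversation) if m["role"] == "user"]
    selected = []
    for i in user_turns:
        text = conversation[i]["content"].strip().lower()
        if text in GENERIC_TERMS:
            continue  # e.g., a bare "stop" or "continue"
        if len(user_turns) == 1 or i == user_turns[-1]:
            selected.append(i)  # single-turn, or the concluding user turn
    return selected
```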
2308.05696#28
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
29
• Environment Settings. We need to set a series of principles for the agents to efficiently communicate, such as (1) Chat Order: To avoid mutual negative influence, we only allow one LLM agent to "speak" (i.e., appending the analysis results to the chat records to let other agents know) at a time. To ensure flexible chat (e.g., if an agent cannot detect anything useful, it should not speak), we rely on Chief DBA to decide which agent speaks in each iteration (diagnosis scheduling); (2) Visibility: By default, we assume the analysis results of agents can be seen by each other, i.e., within the same chat records. In the future, we can split agents into different groups, where each group is in charge of different database clusters/instances and they do not share the chat records; (3) Selector is vital to filter invalid analysis that may mislead the diagnosis directions; (4) Updater works to update agent memory based on the historical records.

• Chat Summary. For a complex database problem, it requires agents dozens of iterations to give in-depth analysis, leading to extremely long chat records. Thus, it is vital to effectively summarize the critical information from chat records without exceeding the maximal length of LLM prompts. To this end, we progressively summarize the lines
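Taken together, the chat-order, selector, and updater principles suggest a simple control loop. A minimal sketch using hypothetical agent objects; this is one reading of the framework, not code from the paper:

```python
def collaborative_diagnosis(chief_dba, agents, max_iters=20):
    """One-speaker-at-a-time loop scheduled by Chief DBA.

    Assumed interfaces: chief_dba.pick_speaker(chat) returns an agent or
    None (diagnosis done); agent.analyze(chat) returns an analysis string
    or None; chief_dba.selector filters invalid analysis; chief_dba.updater
    refreshes agent memory from the shared chat records.
    """
    chat = []  # shared chat records, visible to all agents by default
    for _ in range(max_iters):
        speaker = chief_dba.pick_speaker(chat)  # diagnosis scheduling
        if speaker is None:
            break
        analysis = speaker.analyze(chat)
        if analysis and chief_dba.selector(analysis):  # drop misleading results
            chat.append((speaker.name, analysis))
            for agent in agents:
                chief_dba.updater(agent, chat)  # update agent memory
    return chief_dba.summarize(chat)
```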
2308.05481#29
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, a LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results that D-Bot can efficiently and effectively diagnose the root causes and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
29
# 4.3 Less but Complex is Better Than More but Simple

While we have demonstrated that increasing the complexity of instruction data can enhance the capabilities of LLMs, a new question arises: Is this improvement due to the introduction of more training tokens as complexity increases? Our analysis indicates that the average length of the original Alpaca data, combining both input and output, is 186 tokens. Upon incorporating an additional 10 nodes, this count escalates to 607 tokens – equivalent to a 3.26-fold increase in training data (a token-counting sketch follows this passage). With this question in mind, we introduce a new baseline: Alpaca-4K, trained with 4,000 samples (additionally sampled 3,000 instances from the original Alpaca data). As shown in Table 2, Alpaca-4K's total token count surpasses that of Tree-10-Nodes by 24%. Despite this, with the same number of training steps, a significant 22% performance gap in win rate remains. However, compared to Alpaca-1K, there is indeed a 2% improvement. This suggests that introducing more instruction tokens does enhance

Table 2: Analysis of the complexity scaling laws and win rate-token count relationship.
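The token budgets quoted here (186 vs. 607 average tokens, and the totals in Table 2) can be recomputed with any tokenizer. A minimal sketch using the `tiktoken` library; the paper does not name its tokenizer, so exact counts may differ:

```python
import tiktoken  # assumed tokenizer; the paper does not specify one

enc = tiktoken.get_encoding("cl100k_base")

def total_token_size(dataset):
    """Sum tokens over instruction+output pairs, as in Table 2's budget column.

    dataset: list of {"instruction": str, "output": str}.
    """
    return sum(len(enc.encode(ex["instruction"] + ex["output"]))
               for ex in dataset)

def average_length(dataset):
    """Average tokens per sample, e.g., ~186 for the original Alpaca-1K."""
    return total_token_size(dataset) / len(dataset)
```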
2308.05696#29
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
30
Table 1: Diagnosis performance of single root causes ( : legal diagnosis results; : accurate diagnosis results).

| Type | Root Cause | Description |
|---|---|---|
| Data Insert | INSERT_LARGE_DATA | Long execution time for large data insertions |
| Slow Query | FETCH_LARGE_DATA | Fetching of large data volumes |
| Slow Query | REDUNDANT_INDEX | Unnecessary and redundant indexes in tables |
| Slow Query | LACK_STATISTIC_INFO | Outdated statistical info affecting execution plan |
| Slow Query | MISSING_INDEXES | Missing indexes causing performance issues |
| Slow Query | POOR_JOIN_PERFORMANCE | Poor performance of Join operators |
| Slow Query | CORRELATED_SUBQUERY | Non-promotable subqueries in SQL |
| Concurrent Transaction | LOCK_CONTENTION | Lock contention issues |
| Concurrent Transaction | WORKLOAD_CONTENTION | Workload concentration affecting SQL execution |
| Concurrent Transaction | CPU_CONTENTION | Severe external CPU resource contention |
| Concurrent Transaction | IO_CONTENTION | IO resource contention affecting SQL performance |

of a record used with tools, including inputs for certain tools and the results returned by these tools. Based on the current summary, it extracts the goals intended to be solved with each call to the tool, and forms a new summary, e.g.,
2308.05481#30
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, a LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results that D-Bot can efficiently and effectively diagnose the root causes and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
30
Table 2: Analysis of the complexity scaling laws and win rate-token count relationship.

| Method | Win Rate (%) | Total Token Size |
|---|---|---|
| Alpaca-1K | 26.40 | 186,464 |
| Alpaca-4K | 28.60 | 757,042 |
| WizardLM | 40.37 | 556,981 |
| Tree-3-Nodes | 40.80 (+14.40) | 385,760 |
| Tree-6-Nodes | 44.78 (+18.38) | 546,731 |
| Tree-10-Nodes | 50.19 (+23.79) | 608,556 |

Table 3: Analysis of mixed difficulty training and curriculum learning. The numbers demonstrate the win rates on various subsets and the overall AlpacaEval test set, vs. text-davinci003.
2308.05696#30
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
31
[Current summary]
- I know the start and end time of the anomaly.

[New Record]
Thought: Now that I have the start and end time of the anomaly, I need to diagnose the causes of the anomaly.
Action: is_abnormal_metric
Action Input: {..., 1684600074, "metric_name": "cpu_usage"}
Observation: "The metric is abnormal"

[New summary]
- I know the start and end time of the anomaly.
- I searched for is_abnormal_metric, and I now know that the CPU usage is abnormal.
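The summary update above follows a fixed pattern: fold the newest Thought/Action/Observation record into the running summary. A minimal sketch; `llm_summarize` is a hypothetical LLM call standing in for the model:

```python
def update_summary(current_summary, new_record, llm_summarize):
    """Progressively fold one tool-use record into the running summary.

    new_record: dict with "thought", "action", "action_input", "observation";
    llm_summarize(prompt): hypothetical LLM call returning, as one line,
    the goal this tool call established.
    """
    prompt = (f"Current summary:\n{current_summary}\n"
              f"New record:\nThought: {new_record['thought']}\n"
              f"Action: {new_record['action']}({new_record['action_input']})\n"
              f"Observation: {new_record['observation']}\n"
              "State in one line what this tool call established.")
    new_line = llm_summarize(prompt)
    return current_summary + "\n- " + new_line  # summary grows one line per call
```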
2308.05481#31
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, a LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results that D-Bot can efficiently and effectively diagnose the root causes and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
31
| Method | helpful-base | self-instruction | oasst | koala | vicuna | Overall |
|---|---|---|---|---|---|---|
| Mix-training | 43.41 | 25.39 | 36.70 | 40.38 | 35.00 | 34.78 |
| Hard-to-Easy Curriculum | 49.61 | 22.22 | 43.62 | 41.02 | 46.25 | 37.69 |
| Easy-to-Hard Curriculum | 52.71 | 26.98 | 49.47 | 41.02 | 50.00 | 41.37 |
| Tree-3-nodes | 50.38 | 26.58 | 46.81 | 42.31 | 52.50 | 40.80 |
| Tree-6-nodes | 55.81 | 29.76 | 52.13 | 46.15 | 53.75 | 44.78 |
| Tree-10-nodes | 67.44 | 31.74 | 53.19 | 54.48 | 65.00 | 50.19 |

model performance. Nonetheless, the effectiveness of diverse yet simple instructions still falls short compared to a smaller quantity of more complex directives.

# 4.4 Curriculum Learning May Be Not Effective for Instruction Tuning
2308.05696#31
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
32
[Chief DBA]: The identified root causes of the anomaly are high CPU usage due to a high number of running processes, high load average, blocked processes, and low entropy, as well as high memory usage caused by poor join performance and significant amounts of inactive anonymous memory.
CPU Agent suggestions: check the running processes, load average, blocked processes, and low entropy to optimize CPU usage.
Memory Agent suggestions: 1. Check the GUC parameter 'enable_hashjoin' and ensure that it is set to 'on'. 2. Optimize the SQL structure to reduce JOIN cost and consider using temporary tables to filter data. 3. Check processes or applications running on the system that may be causing the high memory usage and ensure they are properly releasing memory when finished.
Optimization solutions offered in the demo: Rewrite Slow Query; Add Query Hints; Gather More Info.
[Memory Agent]: The high memory usage is caused by poor join performance and significant amounts of inactive anonymous memory. The slow queries involve multiple join operations with large amounts of data.

With this communicative framework and well-defined communication principles, the collaborative diagnosis process among human and LLM agents becomes more efficient (e.g., parallel diagnosis) and effective (e.g., chat records could trigger investigating of in-depth metric observation and root cause analysis).

# 9 PRELIMINARY EXPERIMENT RESULTS
2308.05481#32
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, a LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results that D-Bot can efficiently and effectively diagnose the root causes and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
32
# 4.4 Curriculum Learning May Be Not Effective for Instruction Tuning

Now, armed with three sets of data featuring increasing difficulty levels and aligned themes, we can delve into an unanswered question in instruction tuning: Is it necessary to train LLMs progressively from easy to hard? As depicted in Table 3, we embark on a trial, initially training on Tree-3-Nodes data, followed by Tree-6-Nodes, and finally Tree-10-Nodes. Each segment constitutes one-third of the total training steps (a scheduling sketch follows this passage). We also devise two baselines: one involving the combined training of all three difficulty levels and another wherein difficult samples were trained prior to the easy ones. Experimental results reveal that, compared to mixed-difficulty training and training difficult samples before easier ones, an easy-to-hard curriculum learning approach truly enhances model performance. However, the performance gain from curriculum learning only slightly surpasses that of exclusively training on Tree-3-Nodes, the simplest dataset we construct. This outcome contrasts with previous observations of curriculum learning. We attribute this variance to the fact that modern LLMs possess parameter counts several times larger than those of earlier models like BERT [12] or T5 [30]. With this substantial parameter increase, LLMs are now capable of directly learning from challenging samples, diminishing the need for foundational exposure to simpler samples. The more exposure to challenging samples, the more the model's capabilities are ignited.

# 5 Conclusion
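The easy-to-hard schedule amounts to training on the three datasets in order, one-third of the steps each. A minimal sketch; `train_steps` is a hypothetical stand-in for whatever trainer is used:

```python
import random

def curriculum_schedule(tree3, tree6, tree10, total_steps, train_steps):
    """Easy-to-hard curriculum: each difficulty gets one-third of the steps.

    tree3/tree6/tree10: datasets of increasing complexity;
    train_steps(dataset, steps): hypothetical helper running `steps` updates.
    """
    stages = [tree3, tree6, tree10]  # easy -> hard
    per_stage = total_steps // len(stages)
    for dataset in stages:
        train_steps(dataset, per_stage)

def mix_training(tree3, tree6, tree10, total_steps, train_steps):
    """Mix-training baseline: shuffle all difficulty levels together."""
    mixed = tree3 + tree6 + tree10
    random.shuffle(mixed)
    train_steps(mixed, total_steps)
```

Reversing the stage order in `curriculum_schedule` gives the hard-to-easy baseline from Table 3.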
2308.05696#32
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
33
# 9 PRELIMINARY EXPERIMENT RESULTS

Demonstration. As illustrated in Figure 5, Chief DBA monitors the status of the database to detect anomalies. Upon recognizing a new anomaly, Chief DBA notifies both the Memory Agent and CPU Agent. These agents independently assess the potential root causes and communicate their findings (the root causes and recommended solutions) to the Chief DBA. Subsequently, the Chief DBA consolidates the diagnostic results for the user's convenience. In initial iterations, these agents generally gather limited information, and so they will continue for multiple iterations until the conclusion of Chief DBA is nearly certain or no further valuable information can be obtained. Additionally, during the diagnosis, users have the option to participate by offering instructions and feedback, such as verifying the effectiveness of a proposed optimization solution.

Figure 5: A basic demonstration of D-Bot.

Diagnosis Performance Comparison. We compare the performance of D-Bot against a baseline, namely LLM+Metrics. Both methods are deployed with the OpenAI model GPT-4 [2] alongside metrics and views from PostgreSQL and Prometheus. The evaluation focuses on basic single-cause problems as detailed in Table 1. Besides, we also offer a multi-cause diagnosis example presented in Appendix-B.
2308.05481#33
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, a LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results that D-Bot can efficiently and effectively diagnose the root causes and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
33
# 5 Conclusion

In this study, we have undertaken a preliminary exploration of the intrinsic relationship between instruction complexity and the ability to follow instructions. Our observations include: (1) As the complexity of the instruction data increases, the benefits of instruction tuning continue to amplify. (2) The rise in complexity is partly attributed to additional tokens, yet a few intricate instructions outperform a range of simpler instructions, all within the same token limit. (3) A curriculum-based instruction tuning, progressing from easier to harder, might not yield the desired effectiveness; embracing increased complexity proves essential. We anticipate that this exploration will supplement existing knowledge regarding the aspects of quality, quantity, diversity, and complexity of instruction data. This contribution aims to assist future researchers in constructing superior instruction data.

# References

[1] Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, and Melisa Russak. Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning. arXiv preprint arXiv:2307.03692, 2023.
2308.05696#33
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
34
Preliminary results indicate that LLM+Metrics and D-Bot can achieve a high legality rate (producing valid responses to specific database issues). However, it is a "dangerous behavior" for LLM+Metrics, which actually has a very low success rate (infrequent provision of the correct causes). In contrast, D-Bot achieves both a high legality rate and a high success rate. The reasons are three-fold. First, LLM+Metrics conducts very basic reasoning and often misses key causes. For example, for the INSERT_LARGE_DATA case, LLM+Metrics only finds a "high number of running processes" with the node_procs_running metric, and stops early. In contrast, D-Bot not only finds the high concurrency problem, but also analyzes the operation statistics in the database process and identifies "high memory usage due to heavy use of UPDATE and INSERT operations on xxx tables" by looking up the pg_stat_statements view; a sketch of such a lookup follows this passage.
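The pg_stat_statements lookup can be sketched directly. `pg_stat_statements` is a real PostgreSQL extension view; the DSN below is a placeholder, and the timing column is `total_exec_time` on PostgreSQL 13+ (`total_time` on older versions):

```python
import psycopg2  # assumes the pg_stat_statements extension is enabled

def top_statements(dsn, limit=5):
    """Rank logged statements by execution time and call count,
    the kind of evidence a D-Bot-style agent extracts from pg_stat_statements."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(
            """SELECT query, calls, total_exec_time, rows
               FROM pg_stat_statements
               ORDER BY total_exec_time DESC
               LIMIT %s""", (limit,))
        return cur.fetchall()

# Heavy UPDATE/INSERT activity on a table shows up as high total_exec_time
# and high call counts for those statements.
```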
2308.05481#34
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, a LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results that D-Bot can efficiently and effectively diagnose the root causes and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
34
[2] Stephen H Bach, Victor Sanh, Zheng-Xin Yong, Albert Webson, Colin Raffel, Nihal V Nayak, Abheesht Sharma, Taewoon Kim, M Saiful Bari, Thibault Fevry, et al. Promptsource: An integrated development environment and repository for natural language prompts. arXiv preprint arXiv:2202.01279, 2022. [3] Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022. [4] Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41–48, 2009.
2308.05696#34
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
35
Second, LLM+Metrics often "makes up" reasons without substantial knowledge evidence. For example, for the CORRELATED_SUBQUERY case, LLM+Metrics observes SORT operations in logged queries, and incorrectly attributes the cause to "frequent reading and sorting of large amounts of data", thereby ending the diagnostic process. Instead, D-Bot cross-references with the query optimization knowledge, and then finds that the correlated-subquery structure might be the performance bottleneck, with additional extracted information like estimated operation costs.

Third, LLM+Metrics has trouble deriving appropriate solutions. LLM+Metrics often gives very generic optimization solutions (e.g., "resolve resource contention issues"), which are useless in practice. Instead, leveraging its tool retrieval component, D-Bot can learn to give specific optimization advice (e.g., invoking query transformation rules, adjusting the work_mem parameter) or gather more insightful information (e.g., "calculate the total cost of the plan and check whether the cost rate of the sort or hash operators exceeds the cost rate threshold").
2308.05481#35
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on the cloud databases). Recently large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, a LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results that D-Bot can efficiently and effectively diagnose the root causes and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
35
[5] Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, et al. Alpagasus: Training a better alpaca with fewer data. arXiv preprint arXiv:2307.08701, 2023. [6] Yi Chen, Rui Wang, Haiyun Jiang, Shuming Shi, and Ruifeng Xu. Exploring the use of large language models for reference-free text quality evaluation: A preliminary empirical study, 2023. [7] Zhihong Chen, Feng Jiang, Junying Chen, Tiannan Wang, Fei Yu, Guiming Chen, Hongbo Zhang, Juhao Liang, Chen Zhang, Zhiyi Zhang, et al. Phoenix: Democratizing chatgpt across languages. arXiv preprint arXiv:2304.10453, 2023. [8] Fanny Chevalier, David Auber, and Alexandru Telea. Structural analysis and visualization of c++ code evolution using syntax trees. In Ninth international workshop on Principles of software evolution: in conjunction with the 6th ESEC/FSE joint meeting, pages 90–97, 2007.
2308.05696#35
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
36
This evaluation reveals the potential of D-Bot to go beyond mere anomaly detection to root cause analysis and the provision of actionable suggestions. Despite these advancements, the basic deployment of D-Bot still faces some unresolved challenges. First, it is tricky to share maintenance experience (e.g., varying metric and view names) across different database products. Second, it is labor-intensive to adequately prepare the extensive amount of anomaly-diagnosis data that is essential to fine-tune and direct less-capable LLMs (e.g., those smaller than 10B) to understand complex database knowledge and apply it in maintenance. # 10 CONCLUSION In this paper, we propose a vision of D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. We will continue to complete and improve this work with our collaborators. REFERENCES [1] [n.d.]. https://github.com/OpenBMB/AgentVerse. Last accessed on 2023-8. [2] [n.d.]. https://openai.com/. Last accessed on 2023-8. [3] Surajit Chaudhuri and Vivek R. Narasayya. 1997. An Efficient Cost-Driven Index
2308.05481#36
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
36
[9] Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. See https://vicuna.lmsys.org (accessed 14 April 2023), 2023. [10] Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022. [11] Yinpei Dai, Hangyu Li, Yongbin Li, Jian Sun, Fei Huang, Luo Si, and Xiaodan Zhu. Preview, attend and review: Schema-aware curriculum learning for multi-domain dialog state tracking. arXiv preprint arXiv:2106.00291, 2021. [12] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding, 2019.
2308.05696#36
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
37
Selection Tool for Microsoft SQL Server. In VLDB. 146–155. [4] Karl Dias, Mark Ramacher, Uri Shaft, Venkateshwaran Venkataramani, and Graham Wood. 2005. Automatic Performance Diagnosis and Tuning in Oracle. In Second Biennial Conference on Innovative Data Systems Research, CIDR 2005, Asilomar, CA, USA, January 4-7, 2005, Online Proceedings. www.cidrdb.org, 84–94. http://cidrdb.org/cidr2005/papers/P07.pdf
2308.05481#37
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
37
[13] Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. Gptscore: Evaluate as you desire. arXiv preprint arXiv:2302.04166, 2023. [14] Julia Hancke, Sowmya Vajjala, and Detmar Meurers. Readability classification for German using lexical, syntactic, and morphological features. In Proceedings of COLING 2012, pages 1063–1080, 2012. [15] Or Honovich, Thomas Scialom, Omer Levy, and Timo Schick. Unnatural instructions: Tuning language models with (almost) no human labor. arXiv preprint arXiv:2212.09689, 2022. [16] Srinivasan Iyer, Xi Victoria Lin, Ramakanth Pasunuru, Todor Mihaylov, Daniel Simig, Ping Yu, Kurt Shuster, Tianlu Wang, Qing Liu, Punit Singh Koura, et al. Opt-iml: Scaling language model instruction meta learning through the lens of generalization. arXiv preprint arXiv:2212.12017, 2022.
2308.05696#37
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]
2308.05481
38
[5] Shiyue Huang, Ziwei Wang, Xinyi Zhang, Yaofeng Tu, Zhongliang Li, and Bin Cui. 2023. DBPA: A Benchmark for Transactional Database Performance Anomalies. Proc. ACM Manag. Data 1, 1 (2023), 72:1–72:26. https://doi.org/10.1145/3588926 [6] Prajakta Kalmegh, Shivnath Babu, and Sudeepa Roy. 2019. iQCAR: inter-Query Contention Analyzer for Data Analytics Frameworks. In Proceedings of the 2019 International Conference on Management of Data, SIGMOD Conference 2019, Amsterdam, The Netherlands, June 30 - July 5, 2019, Peter A. Boncz, Stefan Manegold, Anastasia Ailamaki, Amol Deshpande, and Tim Kraska (Eds.). ACM, 918–935. https://doi.org/10.1145/3299869.3319904 [7] Jan Kossmann, Alexander Kastius, and Rainer Schlosser. 2022. SWIRL: Selection of Workload-aware Indexes using Reinforcement Learning. In EDBT. 2:155–2:168.
2308.05481#38
LLM As DBA
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data availability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., millions of instances on cloud databases). Recently, large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, an LLM-based database administrator that can continuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (i) database maintenance knowledge detection from documents and tools, (ii) tree of thought reasoning for root cause analysis, and (iii) collaborative diagnosis among multiple LLMs. Our preliminary experimental results show that D-Bot can efficiently and effectively diagnose the root causes, and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
http://arxiv.org/pdf/2308.05481
Xuanhe Zhou, Guoliang Li, Zhiyuan Liu
cs.DB, cs.AI, cs.CL, cs.LG
null
null
cs.DB
20230810
20230811
[ { "id": "2307.07924" }, { "id": "2304.08354" }, { "id": "2307.03762" }, { "id": "2211.01910" } ]
2308.05696
38
[17] Joel Jang, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, and Minjoon Seo. Exploring the benefits of training expert language models over instruction tuning. arXiv preprint arXiv:2302.03202, 2023. [18] Yunjie Ji, Yan Gong, Yiping Peng, Chao Ni, Peiyan Sun, Dongyu Pan, Baochang Ma, and Xiangang Li. Exploring chatgpt's ability to rank content: A preliminary study on consistency with human preferences, 2023. [19] Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi-Rui Tam, Keith Stevens, Abdullah Barhoum, Nguyen Minh Duc, Oliver Stanley, Richárd Nagyfi, et al. Openassistant conversations–democratizing large language model alignment. arXiv preprint arXiv:2304.07327, 2023.
2308.05696#38
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose \textit{tree-instruct} to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24\% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
http://arxiv.org/pdf/2308.05696
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
cs.CL
null
null
cs.CL
20230810
20230810
[ { "id": "2307.12966" }, { "id": "2302.04166" }, { "id": "2304.07327" }, { "id": "2212.12017" }, { "id": "2304.10453" }, { "id": "2305.16264" }, { "id": "2304.12244" }, { "id": "2304.08244" }, { "id": "2306.04751" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2302.03202" }, { "id": "2204.07705" }, { "id": "2306.02707" }, { "id": "1711.05101" }, { "id": "2204.05862" }, { "id": "2303.18223" }, { "id": "2210.11399" }, { "id": "2212.09689" }, { "id": "2307.06018" }, { "id": "2306.17492" }, { "id": "2307.08701" }, { "id": "2202.01279" }, { "id": "2212.10560" }, { "id": "2106.00291" }, { "id": "2306.08568" }, { "id": "2304.03277" }, { "id": "2307.03692" }, { "id": "2305.11206" }, { "id": "2308.01862" } ]