Dataset columns: id (string, 12–15 chars) · title (string, 8–162 chars) · content (string, 1–17.6k chars) · prechunk_id (string, 0–15 chars) · postchunk_id (string, 0–15 chars) · arxiv_id (string, 10 chars) · references (list, length 1)
2308.13149#13
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
Ability                  Bio    Chem    Phy
Basic Knowledge          2147   2914    456
Knowledge Application    1379   3720    36
Scientific Calculation   301    3401    1165
Research Ability         1000   0       0
Total                    4830   10035   1657

Figure 3: An example of the prompt we used for the AO setting. The red text is the response from the model, while the black text is the inputted prompt. The prompt reads: Given a question and four options, please select the right answer. Your answer should be "A", "B", "C" or "D".
2308.13149#12
2308.13149#14
2308.13149
[ "2307.03109" ]
2308.13149#14
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
Table 2: Statistics of Static Data

Data Statistics  Summarized statistics of SciEval are shown in Table 2, where we only count Static Data. For Dynamic Data, the chemistry part examines the Knowledge Application ability and contains 2000 data, while the physics part evaluates the Scientific Calculation ability and involves 890 data. All

4 https://pubchem.ncbi.nlm.nih.gov/

Figure 4: An example of the prompt we used for the CoT setting. The red text is the response from the model, while the blue text and black text are the inputted prompt. The figure shows the question "How many atoms are in 3.5 moles of arsenic atoms?" with options A. 1.5 x 10^24 atoms, B. 3.0 x 10^24 atoms, C. 2.7 x 10^24 atoms, D. 2.1 x 10^24 atoms, the prompt "Answer: Let's think step by step:", and the model response "To find the number of atoms ... Therefore, the answer is D".

Model                 Creator    #Parameters   Access    SD  DD  ED
GPT-4                 OpenAI     undisclosed   API       ✓   ✓   ✓
GPT-3.5-turbo         OpenAI     undisclosed   API       ✓   ✓   ✓
Claude-v1.3           Anthropic  undisclosed   API       ✓   ✓   ✓
Claude-instant-v1.1   Anthropic  undisclosed   API       ✓   ✓   ✓
ERNIE Bot             Baidu      undisclosed   Web               ✓
SparkDesk             iFLYTEK    undisclosed   Web               ✓
Vicuna                LMSYS      13B           Weights   ✓   ✓
Galactica             Meta       30B, 6.7B     Weights   ✓   ✓
ChatGLM2              Tsinghua   6B            Weights   ✓   ✓
ChatGLM               Tsinghua   6B            Weights   ✓   ✓
Alpaca                Stanford   7B            Weights   ✓   ✓
MOSS                  Fudan      16B           Weights   ✓   ✓
LLaMa                 Meta       7B, 13B       Weights   ✓   ✓
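As a quick sanity check of the Figure 4 example, the arithmetic can be reproduced in a couple of lines. Avogadro's number is supplied here as an assumption, since the excerpt does not state it.

```python
# Check of the Figure 4 example: number of atoms in 3.5 mol of arsenic.
AVOGADRO = 6.022e23  # atoms per mole (assumed; not given in the excerpt)

moles = 3.5
atoms = moles * AVOGADRO
print(f"{atoms:.3e} atoms")  # ~2.108e+24, closest to option D (2.1 x 10^24)
```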
2308.13149#13
2308.13149#15
2308.13149
[ "2307.03109" ]
2308.13149#15
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
Table 3: Models evaluated in this paper. The "Access" column shows whether we have full access to the model weights or can only access the model through API or web. SD stands for Static Data, DD stands for Dynamic Data, and ED stands for Experimental Data. Marking "✓"
2308.13149#14
2308.13149#16
2308.13149
[ "2307.03109" ]
2308.13149#16
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
means we evaluate the corresponding model on this subset.

Model                 Static Data                         Chemistry (DD)                Phy (DD)  Exp
                      Bio     Chem    Phy     Avg.        Acc.   BLEU   MSE             Acc.      Score
GPT-4                 84.49   69.38   65.22   73.93       11.05  23.78  891.09          25.84     93.31
GPT-3.5-turbo         76.42   64.30   52.30   66.97       7.65   18.86  2008.72         21.80     88.27
Claude-v1.3           72.58   59.72   54.94   63.45       5.75   21.98  1489.87         26.14     85.73
Claude-instant-v1.1   70.43   53.36   52.30   58.92       0.45   16.07  8258.46         21.46     87.50
Galactica-30B         66.48   50.16   44.65   54.96       0.9    4.14   485.99          22.47     -
Vicuna-13B            58.39   53.06   45.13   53.93       0.95   6.50   766.64          21.24     -
Galactica-6.7B        57.84   50.77   30.99   50.87       1.55   6.47   5519.82         20.79     -
ChatGLM2-6B           58.62   44.00   40.26   48.44       0.2    1.86   3449.44         24.83     -
ChatGLM-6B            52.54   45.36   40.80   47.23       0.75   2.44   10303.90        21.01     -
Alpaca-7B             56.66   42.43   37.01   46.54       0.2    2.92   428419.27       26.74     -
MOSS-16B              47.71   33.87   31.73   38.23       0.1    7.37   30505.17        24.27     -
LLaMa-13B             48.59   33.56   19.48   36.96       0.3    5.21   3707.01         7.08      -
LLaMa-7B              36.24   26.38   15.02   28.37       0.5    1.26   11305.65        14.38     -
ERNIE Bot             -       -       -       -           -      -      -               -         61.12
SparkDesk             -       -       -       -           -      -      -               -         33.69
2308.13149#15
2308.13149#17
2308.13149
[ "2307.03109" ]
2308.13149#17
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
Table 4: Model performances of the Answer-Only setting. The leaderboard is sorted by the average accuracy on Static Data.

Experiment Setup

Prompts  We evaluate LLMs in both Answer-Only (AO) and Chain-of-Thought (CoT) (Kojima et al. 2022) settings. The prompts we used are shown in Figures 3 and 4 respectively. Furthermore, we also evaluate under a 3-shot setting, where the three exemplars are selected from the dev set.

Models  In order to comprehensively assess the scientific capabilities of Large Language Models (LLMs), we evaluate 15 high-performing LLMs that are widely accessible. These models are selected to represent a diverse range of organizations and vary in size. The details of these models are summarized in Table 3.
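The three prompt settings above can be illustrated with a minimal sketch; the build_prompt helper and exact exemplar formatting are illustrative assumptions, while the instruction and the "Let's think step by step" cue are taken from Figures 3 and 4.

```python
# Minimal sketch of assembling AO, CoT, and 3-shot prompts as described above.
AO_INSTRUCTION = ('Given a question and four options, please select the right answer. '
                  'Your answer should be "A", "B", "C" or "D".')

def build_prompt(question, setting="AO", exemplars=None):
    parts = [AO_INSTRUCTION]
    if setting == "3-shot" and exemplars:
        # Prepend three solved examples drawn from the dev set.
        for ex_question, ex_answer in exemplars[:3]:
            parts.append(f"{ex_question}\nAnswer: {ex_answer}")
    parts.append(question)
    # CoT appends the step-by-step cue; AO and 3-shot ask for the answer directly.
    parts.append("Answer: Let's think step by step:" if setting == "CoT" else "Answer:")
    return "\n\n".join(parts)

# Example usage with a hypothetical question string:
# print(build_prompt("How many atoms are in 3.5 moles of arsenic atoms? A. ... D. ...", "CoT"))
```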
2308.13149#16
2308.13149#18
2308.13149
[ "2307.03109" ]
2308.13149#18
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
• GPT-3.5-turbo and GPT-4 (Schulman et al. 2022; OpenAI 2023) are the strongest GPT model variants from OpenAI that have undergone pretraining, instruction tuning, and reinforcement learning from human feedback (RLHF, (Ouyang et al. 2022)).

• Claude⁵, developed by Anthropic, is often considered comparable to GPT-3.5-turbo. We evaluate both Claude-v1.3 and Claude-instant-v1.1, a lighter version of Claude.
2308.13149#17
2308.13149#19
2308.13149
[ "2307.03109" ]
2308.13149#19
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
• ERNIE Bot⁶ is developed by Baidu, possessing deep semantic understanding and generation capabilities across modalities and languages. SparkDesk⁷ is proposed by iFLYTEK. It has cross-domain knowledge and language understanding capabilities and can understand and execute tasks based on natural dialogue.

• LLaMa (Touvron et al. 2023), developed by Meta, is probably the best open-weight foundation model so far.

5 https://www.anthropic.com/index/introducing-claude
6 https://yiyan.baidu.com/
7 https://xinghuo.xfyun.cn/
2308.13149#18
2308.13149#20
2308.13149
[ "2307.03109" ]
2308.13149#20
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
Figure 5: Accuracy under the Answer-Only, Chain-of-Thought and 3-Shot settings of each LLM for Static Data.

• Vicuna (Zheng et al. 2023) and Alpaca (Taori et al. 2023) are both fine-tuned from LLaMa with supervised instruction fine-tuning. It is believed that the performance of Vicuna is better than that of Alpaca.
2308.13149#19
2308.13149#21
2308.13149
[ "2307.03109" ]
2308.13149#21
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
• Galactica (Taylor et al. 2022) is also developed by Meta and is trained on a large-scale scientific corpus. It is developed to study the use of language models for the automatic organization of science and can perform numerous scientific tasks, such as citation prediction, scientific QA, and molecular property prediction.

presented as multiple-choice questions, which can also be evaluated using accuracy. Conversely, the chemistry questions involve complex components, such as "What is the molecular weight of A?" and "What is the SMILES expression of B?". Hence, for questions with numerical answers, we employ MSE⁹ as the evaluation metric, while for questions with string answers, we utilize the BLEU score (Papineni et al. 2002). Additionally, we also calculate the exact match scores. As for Experimental Data, each experiment consists of multiple open-ended questions. As a result, we assess the model-generated responses manually.
2308.13149#20
2308.13149#22
2308.13149
[ "2307.03109" ]
2308.13149#22
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
• ChatGLM and ChatGLM2, created by Tsinghua University, are based on the GLM architecture (Du et al. 2022) and further adapted on conversational data. MOSS (Sun et al. 2023), developed by Fudan University, is the first publicly available Chinese LLM, and it follows a training procedure similar to ChatGPT.

We evaluate GPT-3.5-turbo, GPT-4 and Claude on all three subsets, including Static Data, Dynamic Data, and Experimental Data. Since we can only assess ERNIE Bot and SparkDesk through a web interface, we evaluate these two models only on the Experimental Data. And for the rest of the LLMs with billions or tens of billions of parameters, since the length of the Experimental Data exceeds the length limit of these models⁸, we evaluate them on Static Data and Dynamic Data, as shown in Table 3.

Evaluation Metrics  In the case of Static Data, all questions are objective, making accuracy the appropriate evaluation metric. For Dynamic Data, the physics questions are

# Experiment Results

Answer-Only Setting  Answer-only results of all the models on the test set are shown in Table 4, and detailed results of Static Data across different knowledge domains are provided in Appendix B. Analyzing the results of Static Data, GPT-4 demonstrates significantly superior performance compared to other models. And only GPT-4, GPT-3.5-turbo, and Claude-v1.3 achieve an average accuracy exceeding 60%, which highlights the challenge posed by SciEval. For the results of Dynamic Data, GPT-4 performs the best in terms of average accuracy and BLEU score. However, for counting and calculation questions, Galactica-30B yields the best results, indicating its strong aptitude in the field of science. Conversely, models with billions or tens of billions of parameters perform poorly on the chemistry subset, suggesting their limited knowledge about molecules. Regarding the performance of models on the physics subset, since all ques-

8 The maximum context length of ChatGLM2 is extended to 32k, while it has limited ability to understand long texts.
9 If the predictions do not contain any number, we will regard the MSE as 1 × 10^10.
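The Dynamic Data metrics described here and in the preceding chunk (accuracy for the multiple-choice physics questions, BLEU for string-valued chemistry answers, and MSE with the footnote-9 fallback for numerical answers) can be sketched as follows; the regex-based number extraction and helper names are assumptions, not the authors' released code.

```python
import re
from nltk.translate.bleu_score import sentence_bleu  # assumes nltk is installed

NO_NUMBER_MSE = 1e10  # footnote 9: MSE is set to 1 x 10^10 when no number is predicted

def first_number(text):
    # Pull the first numeric value out of a free-form model response.
    match = re.search(r"-?\d+\.?\d*(?:[eE][+-]?\d+)?", text)
    return float(match.group()) if match else None

def mse_metric(predictions, references):
    errors = []
    for pred, ref in zip(predictions, references):
        value = first_number(pred)
        errors.append((value - ref) ** 2 if value is not None else NO_NUMBER_MSE)
    return sum(errors) / len(errors)

def bleu_metric(prediction, reference):
    # BLEU over whitespace tokens for string-valued answers (e.g., SMILES or formulas).
    return sentence_bleu([reference.split()], prediction.split())

def accuracy(predicted_choices, gold_choices):
    return sum(p == g for p, g in zip(predicted_choices, gold_choices)) / len(gold_choices)
```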
2308.13149#21
2308.13149#23
2308.13149
[ "2307.03109" ]
2308.13149#23
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
tions are four-choice questions, the accuracy should be at least 25%. However, none of these models achieve satisfactory results in this subset.

Chemistry (Acc.):
Model            AO     CoT     3-Shot
GPT-4            11.05  11.65↑  12.42↑
GPT-3.5-turbo    7.65   10.20↑  8.85↑
Galactica-6.7B   1.55   1.75↑   3.05↑
Vicuna-13B       0.95   1.95↑   1.80↑
Galactica-30B    0.90   2.60↑   3.30↑
ChatGLM-6B       0.75   0.80↑   1.15↑
LLaMa-7B         0.50   0.10↓   1.55↑
LLaMa-13B        0.30   0.25∼   2.11↑
ChatGLM2-6B      0.20   2.65↑   1.60↑
Alpaca-7B        0.20   0.65↑   2.10↑
MOSS-16B         0.10   0.85↑   0.65↑

Physics (Acc.), AO accuracy per model with the CoT and 3-Shot results in parentheses: GPT-4 25.84 (CoT 17.98↓, 3-Shot 51.01↑); GPT-3.5-turbo 21.80 (CoT 47.19↑, 3-Shot 25.39∼); Galactica-6.7B 20.79 (23.37∼, 21.12∼); Vicuna-13B 21.24 (18.65∼, 23.37∼); Galactica-30B 22.47 (22.58∼, 14.72↓); ChatGLM-6B 21.01 (25.39∼, 23.37∼); LLaMa-7B 18.65 (27.53↑, 9.66↓); LLaMa-13B 7.08 (5.84∼, 22.70↑); ChatGLM2-6B 24.83 (25.39∼, 26.74∼); Alpaca-7B 26.71 (28.43∼, 25.62∼); MOSS-16B 24.27 (25.06∼, 26.40∼).

Table 5: Results on Answer-Only, Chain-of-Thought and 3-Shot settings of each LLM for Dynamic Data. ↑ means the performance is slightly better than that under the Answer-Only setting, ↓ means worse, and ∼ means the performance is nearly the same.
2308.13149#22
2308.13149#24
2308.13149
[ "2307.03109" ]
2308.13149#24
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
accuracy of 51.01 under the 3-Shot setting, the highest among all models, demonstrating its ability to learn from a mere three examples.

As for Experimental Data, the GPT-series and Claude-series models achieve good results, while the other two models do not. The detailed scores the models reached in each experiment are shown in Appendix C. However, although some models achieve a great performance, during experiments we find that these models are good at experimental principles and design, while when it comes to analyzing the experiment results, the performances are not satisfying.

Discussion

Training on a large-scale scientific corpus is helpful. Based on the experimental results (Table 4), Galactica (Taylor et al. 2022), which has been trained on an extensive scientific corpus, significantly outperforms other LLMs with a comparable number of parameters, although Galactica is trained with a much smaller amount of data. Remarkably, when tested on Dynamic Data, Galactica surpasses the GPT-series and Claude-series LLMs in computational problems.

CoT Setting and 3-Shot Setting  Comparisons of experiment results among the Answer-Only, Chain-of-Thought and 3-Shot settings are shown in Figure 5 and Table 5.¹⁰ We refer detailed results to Appendix A and B. The experimental results from Static Data reveal that solely the GPT-series LLMs get a performance enhancement within the CoT setting, due to the limited CoT capabilities of other LLMs. As for the 3-Shot setting, roughly half of the LLMs analyzed demonstrate superior performance relative to the Answer-Only setting. The performances of the remaining LLMs are closely similar to those observed within the Answer-Only setting.

From the experimental results derived from Dynamic Data, it is observed that both CoT and 3-Shot significantly enhance the performance of most Large Language Models (LLMs) in the chemistry subset. However, the performances achieved are still not up to the mark. In the physics subset, the impact of CoT and 3-Shot on most LLMs is less pronounced, resulting in nearly random performances. Under the CoT setting, GPT-3.5-turbo achieves an accuracy of 47.19, suggesting a robust understanding of physical principles.
2308.13149#23
2308.13149#25
2308.13149
[ "2307.03109" ]
2308.13149#25
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
Conversely, the performance of GPT-4 is markedly poor, from which we find that despite its extensive knowledge of physical principles, it frequently employs incorrect formulas to solve problems. Nevertheless, GPT-4 attains an

10 When evaluating on the CoT and 3-Shot settings, Claude-Instant and Claude are not available to us, due to the limitation of the API.

Most LLMs perform badly on calculation problems, especially in the physics domain. Detailed results across various knowledge domains on Static Data (refer to Appendix B) reveal that most LLMs underperform in the Scientific Calculation domain, while demonstrating relatively superior performance in other domains; this is particularly acute in the field of physics. Similar issues are also observed in Dynamic Data and Experimental Data. In the context of Dynamic Data, the mean square error, employed to evaluate calculation abilities within the chemistry subset, is exceedingly high for most LLMs, and almost all LLMs can only achieve nearly random performance within the physics subset. Regarding Experimental Data, our findings indicate that these LLMs struggle with the analysis of experimental results.

Conclusion

In this paper, we introduce SciEval, a benchmark designed to evaluate the scientific capabilities of LLMs. SciEval comprises about 18,000 challenging scientific questions, covering three fundamental fields of science. SciEval assesses the scientific ability of LLMs across four dimensions. It incorporates both objective and subjective questions, and employs dynamic data generation to mitigate potential data leakage. We conduct comprehensive experiments on various advanced LLMs using SciEval and perform thorough analyses. Our experimental results reveal that most LLMs do not perform well on our benchmark, with the exception of the GPT-series and Claude-series LLMs. We hope that SciEval can serve as a robust benchmark for assessing the scientific capabilities of LLMs.
2308.13149#24
2308.13149#26
2308.13149
[ "2307.03109" ]
2308.13149#26
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
References

Blanco-Gonzalez, A.; Cabezon, A.; Seco-Gonzalez, A.; Conde-Torres, D.; Antelo-Riveiro, P.; Pineiro, A.; and Garcia-Fandino, R. 2023. The role of AI in drug discovery: challenges, opportunities, and strategies. Pharmaceuticals, 16(6): 891.
Chang, Y.; Wang, X.; Wang, J.; Wu, Y.; Zhu, K.; Chen, H.; Yang, L.; Yi, X.; Wang, C.; Wang, Y.; et al. 2023.
2308.13149#25
2308.13149#27
2308.13149
[ "2307.03109" ]
2308.13149#27
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
A survey on evaluation of large language models. arXiv preprint arXiv:2307.03109.
Du, Z.; Qian, Y.; Liu, X.; Ding, M.; Qiu, J.; Yang, Z.; and Tang, J. 2022. GLM: General Language Model Pretraining with Autoregressive Blank Infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 320–335.
Forehand, M. 2010. Bloom's taxonomy. Emerging perspectives on learning, teaching, and technology, 41(4): 47–56.
Frey, N.; Soklaski, R.; Axelrod, S.; Samsi, S.; Gomez-Bombarelli, R.; Coley, C.; and Gadepally, V. 2022. Neural scaling of deep chemical models.
Guo, T.; Guo, K.; Liang, Z.; Guo, Z.; Chawla, N. V.; Wiest, O.; Zhang, X.; et al. 2023. What indeed can GPT models do in chemistry? A comprehensive benchmark on eight tasks. arXiv preprint arXiv:2305.18365.
Hendrycks, D.; Burns, C.; Basart, S.; Zou, A.; Mazeika, M.; Song, D.; and Steinhardt, J. 2020. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300.
Hendrycks, D.; Burns, C.; Kadavath, S.; Arora, A.; Basart, S.; Tang, E.; Song, D.; and Steinhardt, J. 2021. Measuring mathematical problem solving with the MATH dataset. arXiv preprint arXiv:2103.03874.
Huang, Y.; Bai, Y.; Zhu, Z.; Zhang, J.; Zhang, J.; Su, T.; Liu, J.; Lv, C.; Zhang, Y.; Lei, J.; et al. 2023. C-Eval: A multi-level multi-discipline Chinese evaluation suite for foundation models. arXiv preprint arXiv:2305.08322.
2308.13149#26
2308.13149#28
2308.13149
[ "2307.03109" ]
2308.13149#28
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
Jin, D.; Pan, E.; Oufattole, N.; Weng, W.-H.; Fang, H.; and Szolovits, P. 2021. What disease does this patient have? A large-scale open domain question answering dataset from medical exams. Applied Sciences, 11(14): 6421.
Jin, Q.; Dhingra, B.; Liu, Z.; Cohen, W. W.; and Lu, X. 2019. PubMedQA: A dataset for biomedical research question answering. arXiv preprint arXiv:1909.06146.
Kojima, T.; Gu, S. S.; Reid, M.; Matsuo, Y.; and Iwasawa, Y. 2022.
2308.13149#27
2308.13149#29
2308.13149
[ "2307.03109" ]
2308.13149#29
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
Large language models are zero-shot reasoners. Advances in Neural Information Processing Systems, 35: 22199–22213.
Krathwohl, D. R. 2002. A revision of Bloom's taxonomy: An overview. Theory into Practice, 41(4): 212–218.
Liang, P.; Bommasani, R.; Lee, T.; Tsipras, D.; Soylu, D.; Yasunaga, M.; Zhang, Y.; Narayanan, D.; Wu, Y.; Kumar, A.; et al. 2022. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110.
Lu, P.; Mishra, S.; Xia, T.; Qiu, L.; Chang, K.-W.; Zhu, S.-C.; Tafjord, O.; Clark, P.; and Kalyan, A. 2022. Learn to explain: Multimodal reasoning via thought chains for science question answering. Advances in Neural Information Processing Systems, 35: 2507–2521.
Luo, R.; Sun, L.; Xia, Y.; Qin, T.; Zhang, S.; Poon, H.; and Liu, T.-Y. 2022.
2308.13149#28
2308.13149#30
2308.13149
[ "2307.03109" ]
2308.13149#30
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
BioGPT: generative pre-trained transformer for biomedical text generation and mining. Briefings in Bioinformatics, 23(6): bbac409.
OpenAI. 2023. GPT-4 Technical Report. arXiv:2303.08774.
Ouyang, L.; Wu, J.; Jiang, X.; Almeida, D.; Wainwright, C.; Mishkin, P.; Zhang, C.; Agarwal, S.; Slama, K.; Ray, A.; et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35: 27730–
2308.13149#29
2308.13149#31
2308.13149
[ "2307.03109" ]
2308.13149#31
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
27744.
Papineni, K.; Roukos, S.; Ward, T.; and Zhu, W.-J. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, 311–318.
Schulman, J.; Zoph, B.; Kim, C.; Hilton, J.; Menick, J.; Weng, J.; Uribe, J. F. C.; Fedus, L.; Metz, L.; Pokorny, M.; et al. 2022. ChatGPT: Optimizing language models for dialogue. OpenAI blog.
Singhal, K.; Azizi, S.; Tu, T.; Mahdavi, S. S.; Wei, J.; Chung, H. W.; Scales, N.; Tanwani, A.; Cole-Lewis, H.; Pfohl, S.; et al. 2023. Large language models encode clinical knowledge. Nature, 1–9.
Srivastava, A.; Rastogi, A.; Rao, A.; Shoeb, A. A. M.; Abid, A.; Fisch, A.; Brown, A. R.; Santoro, A.; Gupta, A.; Garriga-Alonso, A.; et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615.
2308.13149#30
2308.13149#32
2308.13149
[ "2307.03109" ]
2308.13149#32
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
Sun, T.; Zhang, X.; He, Z.; Li, P.; Cheng, Q.; Yan, H.; Liu, X.; Shao, Y.; Tang, Q.; Zhao, X.; Chen, K.; Zheng, Y.; Zhou, Z.; Li, R.; Zhan, J.; Zhou, Y.; Li, L.; Yang, X.; Wu, L.; Yin, Z.; Huang, X.; and Qiu, X. 2023. MOSS: Training Conversational Language Models from Synthetic Data.
Taori, R.; Gulrajani, I.; Zhang, T.; Dubois, Y.; Li, X.; Guestrin, C.; Liang, P.; and Hashimoto, T.
2308.13149#31
2308.13149#33
2308.13149
[ "2307.03109" ]
2308.13149#33
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
B. 2023. Stanford Alpaca: An instruction-following LLaMA model.
Taylor, R.; Kardas, M.; Cucurull, G.; Scialom, T.; Hartshorn, A.; Saravia, E.; Poulton, A.; Kerkez, V.; and Stojnic, R. 2022. GALACTICA: A Large Language Model for Science.
Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.-A.; Lacroix, T.; Rozière, B.; Goyal, N.; Hambro, E.; Azhar, F.; et al. 2023.
2308.13149#32
2308.13149#34
2308.13149
[ "2307.03109" ]
2308.13149#34
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
Wang, F.; and Miao, Q. 2023. Novel Paradigm for AI-driven Scientific Research: From AI4S to Intelligent Science. Bulletin of Chinese Academy of Sciences (Chinese Version), 38(4): 536–540.
Wang, X.; Hu, Z.; Lu, P.; Zhu, Y.; Zhang, J.; Subramaniam, S.; Loomba, A. R.; Zhang, S.; Sun, Y.; and Wang, W. 2023. SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models. arXiv preprint arXiv:2307.10635.
Zheng, L.; Chiang, W.-L.; Sheng, Y.; Zhuang, S.; Wu, Z.; Zhuang, Y.; Lin, Z.; Li, Z.; Li, D.; Xing, E.; et al. 2023. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. arXiv preprint arXiv:2306.05685.
Zhong, W.; Cui, R.; Guo, Y.; Liang, Y.; Lu, S.; Wang, Y.; Saied, A.; Chen, W.; and Duan, N. 2023.
2308.13149#33
2308.13149#35
2308.13149
[ "2307.03109" ]
2308.13149#35
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
AGIEval: A human-centric benchmark for evaluating foundation models. arXiv preprint arXiv:2304.06364.

# A Detailed Results on Dynamic Data

In this section, we show detailed results on the Chemistry subset of Dynamic Data under the Chain-of-Thought (Table 6) and 3-Shot settings (Table 7). The performance comparison under different settings can be found in Table 5 of the main body.

# B Detailed Results on Static Data

In this section, we show detailed results on Static Data across different knowledge domains under the Answer-Only (Table 9), Chain-of-Thought (Table 10) and 3-Shot settings (Table 11); the overall results are shown in Table 8.

# C Detailed Results on Experimental Data

In this section, we show detailed results for each experiment in Table 12. Each category contains four experiments, and each experiment is composed of several questions.
2308.13149#34
2308.13149#36
2308.13149
[ "2307.03109" ]
2308.13149#36
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
Model            Acc.   BLEU   MSE
GPT-4            11.65  16.13  156.34
GPT-3.5-turbo    10.2   12.93  1336.76
Galactica-30B    2.6    0.52   12155.50
Vicuna-13B       1.95   3.28   71509.65
Galactica-6.7B   1.75   2.67   11517.12
ChatGLM2-6B      2.65   0.83   1113845.91
ChatGLM-6B       0.8    1.33   36150.04
Alpaca-7B        0.65   1.58   413735.26
MOSS-16B         0.85   3.74   145736.31
LLaMa-13B        0.25   0.85   791120.58
LLaMa-7B         0.1    0.74   22521.28

Table 6: Detailed results on the Chemistry subset of Dynamic Data under the Chain-of-Thought setting.

Model            Acc.   BLEU   MSE
GPT-4            12.42  26.97  191.99
GPT-3.5-turbo    8.85   24.92  483.39
Galactica-30B    3.30   12.08  264.58
Vicuna-13B       1.80   9.24   88.79
Galactica-6.7B   3.05   5.93   324.05
ChatGLM2-6B      1.60   5.05   1080.68
ChatGLM-6B       1.15   4.24   5578.05
Alpaca-7B        2.10   5.85   2068.95
MOSS-16B         0.65   9.00   13811.04
LLaMa-13B        2.11   9.69   423.60
LLaMa-7B         1.55   7.80   598.44

Table 7: Detailed results on the Chemistry subset of Dynamic Data under the 3-Shot setting.
2308.13149#35
2308.13149#37
2308.13149
[ "2307.03109" ]
2308.13149#37
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
Model            AO     CoT    3-Shot
GPT-4            73.93  79.76  80.09
GPT-3.5-turbo    66.97  68.28  68.89
Galactica-30B    54.96  41.56  53.45
Vicuna-13B       53.93  53.34  50.50
Galactica-6.7B   50.87  36.93  49.39
ChatGLM2-6B      48.44  48.22  47.65
ChatGLM-6B       47.23  39.48  46.59
Alpaca-7B        46.54  40.57  47.85
MOSS-16B         38.23  35.92  42.00
LLaMa-13B        36.96  33.53  42.49
LLaMa-7B         28.37  24.56  35.37

Table 8: Overall results on Static Data under the Answer-Only (AO), Chain-of-Thought (CoT) and 3-Shot settings.

# D Dataset Example

In this section, we show examples of different disciplines, different knowledge domains, and different subsets, including Static Data (Figures 6 to 15) and Dynamic Data (Figures 16 and 17).
2308.13149#36
2308.13149#38
2308.13149
[ "2307.03109" ]
2308.13149#38
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
What is ovulation?
A. Fusion of sperm and egg during fertilization
B. Release of hormones from the pituitary gland
C. Release of secondary oocyte from the ovary during the menstrual cycle
D. Formation of a mature egg in the ovary
Answer: C

Figure 6: A biology example of the Basic Knowledge domain in Static Data.

Model                 Biology                       Chemistry             Physics
                      BK     KA     SC     RA       BK     KA     SC      BK     KA     SC
GPT-4                 94.29  80.81  89.14  67.08    92.94  30.24  68.79   92.65  93.10  53.70
GPT-3.5-turbo         90.61  61.94  77.90  65.40    84.57  52.45  52.86   87.50  82.76  37.66
Claude-v1.3           90.92  62.35  76.78  45.98    85.11  24.04  55.84   89.22  93.10  40.44
Claude-instant-v1.1   88.80  54.98  76.78  50.33    80.45  10.91  51.42   85.05  82.76  38.62
Galactica-30B         77.85  45.18  65.92  71.54    66.36  38.41  42.16   73.53  65.52  32.76
Vicuna-13B            80.13  40.24  67.79  33.82    64.80  53.89  42.59   71.08  62.07  34.49
Galactica-6.7B        66.86  36.36  57.68  68.08    54.52  79.82  33.01   57.60  65.51  19.60
ChatGLM2-6B           71.21  35.38  58.80  63.50    56.78  31.74  39.19   61.76  62.07  31.22
ChatGLM-6B            66.34  34.66  53.93  47.10    54.41  46.11  37.23   62.74  82.76  31.03
Alpaca-7B             62.30  37.81  50.19  72.43    48.49  49.71  33.60   55.39  79.31  28.63
MOSS-16B              51.92  30.85  38.20  64.73    39.40  28.87  31.63   42.40  68.96  26.51
LLaMa-13B             55.03  30.69  45.32  60.38    37.08  60.42  17.11   41.18  58.62  9.89
LLaMa-7B              31.33  28.10  22.47  62.16    21.15  52.97  17.53   17.89  41.38  13.16
2308.13149#37
2308.13149#39
2308.13149
[ "2307.03109" ]
2308.13149#39
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
Table 9: Detailed Model Performances of Answer-Only setting across different knowledge domains on Static Data. â BKâ stands for Basic Knowledge, â KAâ stands for Knowledge Application, â SCâ stands for Scientific Calculation, and â RAâ stands for Research Ability. Model BK Biology KA SC RA BK Chemistry KA SC BK Physics KA SC 93.57â GPT-4 89.52â GPT-3.5-turbo 61.05â Galactica-30B 79.15â Vicuna-13B Galactica-6.7B 53.59â ChatGLM2-6B 64.99â 55.39â ChatGLM-6B 53.53â Alpaca-7B 50.47â MOSS-16B 41.86â LLaMa-13B 28.42â LLaMa-7B 78.95â 65.18â 38.22â 44.29â 30.77â 34.90â 31.26â 32.87â 29.88â 20.89â 15.38â 88.39â 81.65â 51.31â 65.54â 47.19â 53.93â 43.82â 44.57â 40.82â 34.08â 24.72â 66.63â 58.04â 67.08â 56.58â 69.53â 57.92â 51.67â 60.16â 60.82â 70.31â 64.51â 92.52â 83.54â 46.77â 64.03â 44.10â 53.46â 44.67â 44.48â 39.56â 33.07â 23.82â 54.08â 24.76â 32.27â 35.27â 22.86â 36.51â 26.84â 33.38â 12.67â 2.03â 18.88â 77.46â 66.99â 27.05â 42.13â 23.98â 39.02â 32.58â 32.61â 31.96â 20.77â 18.81â 92.65â ¼ 93.10â ¼ 71.18â 60.33â 93.10â 84.56â 22.48â 65.52â 54.17â 46.01â
2308.13149#38
2308.13149#40
2308.13149
[ "2307.03109" ]
2308.13149#40
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
72.41â 75.00â 13.21â 58.62â 46.08â 36.02â 65.52â 58.33â 28.63â 65.52â 51.22â 27.66â 58.62â 50.24â 28.15â 75.86â 37.99â 15.66â 37.93â 37.99â 17.96â 37.93â 24.75â Table 10: Detailed Model Performances of Chain-of-Thought setting across different knowledge domains on Static Data. â means the performance is slightly better than that under Answer-Only setting, â means the performance is worse, and â ¼ means the performance is nearly the same.
2308.13149#39
2308.13149#41
2308.13149
[ "2307.03109" ]
2308.13149#41
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
A population of trout lives in a small lake. Some of the trout have a mutation that makes them more colorful. What are some reasons this population is not at Hardy-Weinberg equilibrium? A. No sexual dimorphism, constant population size, no selection, non-overlapping generations B. No sexual reproduction, equal allele frequencies, non-diploid organisms, no migration C. Infinitely large population, no overlapping generations, no mutations, random mating D. Not infinitely large population, overlapping generations, mutations present, non-random mating Answer: D
2308.13149#40
2308.13149#42
2308.13149
[ "2307.03109" ]
2308.13149#42
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
The bones of a prehistoric man found in the desert of new Mexico contain approximately 5% of the original amount of carbon 14. If the half-life of carbon 14 is 5600 years, approximately how long ago did the man die? # A. 7523 years B. 10412 years # C. 9350 years D. 8678.5 years # Answer: D Figure 8: A biology example of Scientific Calculation do- main in Static Data. Figure 7: A biology example of Knowledge Application do- main in Static Data. Model BK Biology KA SC RA BK Chemistry KA SC BK Physics KA SC 94.97â GPT-4 90.82â GPT-3.5-turbo 76.45â Galactica-30B 79.41â Vicuna-13B Galactica-6.7B 64.83â ChatGLM2-6B 72.10â 61.51â ChatGLM-6B 65.82â Alpaca-7B 54.20â MOSS-16B 64.00â LLaMa-13B 37.14â LLaMa-7B 81.62â 62.19â 41.30â 44.37â 33.60â 36.03â 32.23â 35.71â 29.80â 32.39â 29.15â 91.01â 80.52â 66.67â 67.04â 51.31â 57.68â 56.55â 57.30â 43.07â 48.69â 34.46â 78.01â 61.72â 84.11â 55.36â 70.98â 65.29â 53.68â 70.76â 60.60â 35.16â 49.44â 93.16â 84.84â 67.05â 64.64â 53.34â 58.15â 51.97â 47.46â 41.62â 40.93â 33.68â 66.23â 69.24â 31.29â 9.93â 67.08â 18.62â 53.49â 60.48â 58.52â 61.53â 58.13â 71.18â 52.57â 40.14â 45.36â 32.68â 39.12â
2308.13149#41
2308.13149#43
2308.13149
[ "2307.03109" ]
2308.13149#43
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
34.80â 33.40â 30.49â 31.01â 26.46â 93.14â 88.24â 69.36â 70.59â 59.31â 64.70â 64.22â 56.37â 42.65â 47.55â 30.64â Table 11: Detailed Model Performances of 3-Shot setting across different knowledge domains on Static Data. 1 Biology 2 3 4 1 Chemistry 3 2 4 1 2 Physics 3 4 Avg 95 90 90 97.5 90 0 92 90 84 82 76 60 100 90 85 95 85 20 100 100 97.5 95 98.33 72 96.25 90.62 81.25 93.75 15 15 88 88 88 95 66 60 72.5 80 80 70 50 30 95 90 88 90 65 36 99 99 92 97 78 32 97.14 95.71 93.57 94.28 61.43 25.71 98.57 87.14 90.71 87.14 0 28.57 86.25 58.75 58.75 53.33 48.75 25 93.31 88.27 85.73 87.5 61.12 33.69 Table 12: Detailed scores model reached in each experiment. GPT-series models and Claude-series models achieve a good performance.
2308.13149#42
2308.13149#44
2308.13149
[ "2307.03109" ]
2308.13149#44
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
To investigate the role of human T-lymphotrophic virus type I (HTLV-I) infection in four patients who developed slowly progressive myelopathy with abnormal MRI lesions in the cervical cord levels. Clinical and neuroradiologic examinations were performed, and the odds that an HTLV-I-infected individual of specified genotype, age, and provirus load had HTLV-I-associated myelopathy (HAM)/tropical spastic paraparesis (TSP) were calculated. What is the difference between an alkane, an alkene, and an alkyne? A. Alkane: double bond; Alkene: single bond; Alkyne: triple bond B. Alkane: single bond; Alkene: double bond; Alkyne: triple bond C. Alkane: triple bond; Alkene: double bond; Alkyne: single bond D. Alkane: single bond; Alkene: triple bond; Alkyne: double bond
2308.13149#43
2308.13149#45
2308.13149
[ "2307.03109" ]
2308.13149#45
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
# Answer: B

Anti-HTLV-I antibodies were positive in both the serum and the CSF in all of the patients. Biopsied sample from spinal cord lesions showed inflammatory changes in Patient 1. Patient 2 had a demyelinating type of sensorimotor polyneuropathy. Two of the three patients examined showed high risk of developing HAM/TSP in virologic and immunologic aspects.

Figure 10: A chemistry example of the Basic Knowledge domain in Static Data.
2308.13149#44
2308.13149#46
2308.13149
[ "2307.03109" ]
2308.13149#46
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
Chronic progressive cervical myelopathy with HTLV-I infection: Variant form of HAM/TSP?
Answer: yes

How would you separate a mixture of alcohol and water?

Figure 9: A biology example of the Research Ability domain in Static Data.

A. Freeze the mixture, remove solid water, then melt remaining alcohol.
B. Shake the mixture, let it settle, then remove separated layers.
C. Heat the mixture, collect evaporated alcohol, then collect evaporated water.
D. Filter the mixture through a membrane, then evaporate collected water.
Answer: C

Figure 11: A chemistry example of the Knowledge Application domain in Static Data.

Na3PO4 dissolves in water to produce an electrolyte solution. What is the osmolarity of a 2.0 x 10^(-3) M Na3PO4 solution?
A. 8.0 x 10^(-3) osmol L^(-1)
B. 6.0 x 10^(-3) osmol L^(-1)
C. 12.0 x 10^(-3) osmol L^(-1)
D. 2.0 x 10^(-3) osmol L^(-1)
Answer: A

Figure 12: A chemistry example of the Scientific Calculation domain in Static Data.
2308.13149#45
2308.13149#47
2308.13149
[ "2307.03109" ]
2308.13149#47
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
How can momentum be decreased? A. Decrease mass or velocity, or transfer momentum through collision. B. Keep mass and velocity constant, avoid collisions. C. Increase mass and velocity, avoid collisions. D. Increase mass, decrease velocity, and avoid collisions. Answer: A Figure 13: A physics example of Basic Knowledge domain in Static Data. If i run down some stairs and stop, what happens to your kinetic energy and your initial gravitational potential energy? A. Kinetic energy increases; potential energy decreases. B. Kinetic energy becomes zero; potential energy increases. C. Kinetic energy decreases; potential energy becomes zero. D. Kinetic energy becomes zero; potential energy decreases. Answer: D Figure 14: A physics example of Knowledge Application domain in Static Data. An object with a mass of 8 kg is traveling in a circular path of a radius of 12 m. If the object's angular velocity changes from 5 Hz to 7 Hz in 6 s, what torque was applied to the object? A. 4825.4Nm B. 3620.05 Nm C. 2412.7 Nm D. 1206.35 Nm Answer:
2308.13149#46
2308.13149#48
2308.13149
[ "2307.03109" ]
2308.13149#48
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
C

Figure 15: A physics example of the Scientific Calculation domain in Static Data.

What is the molecular formula of (2R,5S)-5-ethyl-2-methylnonanal?
Answer: C12H24O

What is the molecular weight of (3E,6E)-5,5-dimethylocta-1,3,6-triene?
Answer (numerical number): 136.23

Figure 16: Two chemistry examples in Dynamic Data.
2308.13149#47
2308.13149#49
2308.13149
[ "2307.03109" ]
2308.13149#49
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
Calculate the total energy released in the accretion disk of a black hole with a mass of 9 solar masses, a radius of 68 kilometers, and an accretion rate of 0.2 solar masses per year. Assume the disk is made of gas that is 10% hydrogen and 90% helium and has a temperature of 10 million Kelvin. Please note that the following constants might be used in the calculations: gravitational constant: G = 6.674e-11 N·m^2/kg^2; solar mass = 1.989e30 kg; velocity of light: c = 3e8 m/s.
2308.13149#48
2308.13149#50
2308.13149
[ "2307.03109" ]
2308.13149#50
SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research
Additionally, please select the option from the given choices that you believe is closest to the correct answer!
A. 5.13e+38 J
B. 6.83e+38 J
C. 5.81e+38 J
D. 2.49e+38 J
Answer: D

Figure 17: A physics example in Dynamic Data.
2308.13149#49
2308.13149
[ "2307.03109" ]
2308.12966#0
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
arXiv:2308.12966v3 [cs.CV] 13 Oct 2023

# Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond

Jinze Bai*  Shuai Bai*  Shusheng Yang*  Shijie Wang  Sinan Tan  Peng Wang  Junyang Lin  Chang Zhou†  Jingren Zhou

Alibaba Group

Code & Demo & Models: https://github.com/QwenLM/Qwen-VL

# Abstract

In this work, we introduce the Qwen-VL series, a set of large-scale vision-language models (LVLMs) designed to perceive and understand both texts and images. Starting from the Qwen-LM as a foundation, we endow it with visual capacity by the meticulously designed (i) visual receptor, (ii) input-output interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal cleaned corpus. Beyond the conventional image description and question-answering, we implement the grounding and text-reading ability of Qwen-VLs by aligning image-caption-box tuples. The resulting models, including Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar model scales on a broad range of visual-centric benchmarks (e.g., image captioning, question answering, visual grounding) and different settings (e.g., zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to existing vision-language chatbots. All models are public to facilitate future research.

Figure 1: Qwen-VL achieves state-of-the-art performance on a broad range of tasks compared with other generalist models. (Benchmarks shown: VQAv2 dev, RefCOCOg (test), RefCOCO+ (testB), RefCOCO (testB), OKVQA, OCR-VQA, AI2D, Flickr30K, GQA, TextVQA, ChartQA; models compared: Generalist VL SOTAs, Shikra-13B, Pix2Struct-Large (1.3B), InstructBLIP (Vicuna-13B), Qwen-VL.)

* Equal contribution, † Corresponding author
2308.12966#1
2308.12966
[ "2211.01335" ]
2308.12966#1
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
Figure 2: Some qualitative examples generated by our Qwen-VL-Chat. Qwen-VL-Chat supports multiple image inputs, multi-round dialogue, multilingual conversation, text-reading, localization, fine-grained recognition and understanding ability. (The screenshots include a grounded answer with <box>(750,0),(999,999)</box> coordinates, a comparison of the Chongqing and Beijing city skylines, reading a hospital floor sign to locate the surgery and otolaryngology departments, summarizing this paper's abstract, and fixing a bug in a C function that finds the minimum value of an array.)

# 1 Introduction

Recently, Large Language Models (LLMs) (Brown et al., 2020; OpenAI, 2023; Anil et al., 2023; Gao et al., 2023; Qwen, 2023) have attracted wide attention due to their powerful capabilities in text generation and comprehension. These models can be further aligned with user intent through instruction fine-tuning, showcasing strong interactive capabilities and the potential to enhance productivity as intelligent assistants. However, native large language models live only in the pure-text world, lacking the ability to handle other common modalities (such as images, speech, and videos), which greatly restricts their application scope. Motivated by this, a group of Large Vision Language Models (LVLMs) (Alayrac et al., 2022; Chen et al., 2022; Li et al., 2023c; Dai et al., 2023; Huang et al., 2023; Peng et al., 2023; Zhu et al., 2023; Liu et al., 2023; Ye et al., 2023b,a; Chen et al., 2023a; Li et al., 2023a; Zhang et al., 2023; Sun et al., 2023; OpenAI, 2023) have been developed to enhance large language models with the ability to perceive and understand visual signals. These large-scale vision-language models demonstrate promising potential in solving real-world vision-centric problems. Nevertheless, although many works have explored the limitations and potential of LVLMs, current open-source LVLMs often suffer from inadequate training and optimization and thus lag far behind the proprietary models (Chen et al., 2022, 2023b; OpenAI, 2023), which hinders further exploration and application of LVLMs in the open-source community.
2308.12966#0
2308.12966#2
2308.12966
[ "2211.01335" ]
2308.12966#2
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
All models will be made public to facilitate future research. retum ans; y Figure 2: Some qualitative examples generated by our Qwen-VL-Chat. Qwen-VL-Chat supports multiple image inputs, multi-round dialogue, multilingual conversation, text-reading, localization, fine-grained recognition and understanding ability. # 1 Introduction Recently, Large Language Models (LLMs) (Brown et al., 2020; OpenAI, 2023; Anil et al., 2023; Gao et al., 2023; Qwen, 2023) have attracted wide attention due to their powerful capabilities in text generation and comprehension. These models can be further aligned with user intent through fine-tuning instructions, showcasing strong interactive capabilities and the potential to enhance productivity as intelligent assistants. However, native large language models only live in the pure-text world, lacking the ability to handle other common modalities (such as images, speech, and videos), resulting in great restrictions on their application scope. Motivated by this, a group of Large Vision Language Models (LVLMs) (Alayrac et al., 2022; Chen et al., 2022; Li et al., 2023c; Dai et al., 2023; Huang et al., 2023; Peng et al., 2023; Zhu et al., 2023; Liu et al., 2023; Ye et al., 2023b,a; Chen et al., 2023a; Li et al., 2023a; Zhang et al., 2023; Sun et al., 2023; OpenAI, 2023) have been developed to enhance large language models with the ability to perceive and understand visual signals. These large-scale vision-language models demonstrate promising potential in solving real-world vision-central problems. Nevertheless, despite that lots of works have been conducted to explore the limitation and potency of LVLMs, current open-source LVLMs always suffer from inadequate training and optimization, thus lag far behind the proprietary models (Chen et al., 2022, 2023b; OpenAI, 2023), which hinders further exploration and application of LVLMs in open-source community.
2308.12966#1
2308.12966#3
2308.12966
[ "2211.01335" ]
2308.12966#3
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
Whatâ s more, as real-world visual scenarios are quite complicated, fine-grained visual understanding plays a crucial role for LVLMs to assist people effectively and precisely. But only a few attempts had been made toward this direction (Peng et al., 2023; Chen et al., 2023a), the majority of open-source LVLMs remain perceiving the image in a coarse-grained approach and lacking the ability to execute fine-grained perception such as object grounding or text reading.
2308.12966#2
2308.12966#4
2308.12966
[ "2211.01335" ]
2308.12966#4
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
2 In this paper, we explore a way out and present the newest members of the open-sourced Qwen families: Qwen-VL series. Qwen-VLs are a series of highly performant and versatile vision-language foundation models based on Qwen-7B (Qwen, 2023) language model. We empower the LLM basement with visual capacity by introducing a new visual receptor including a language-aligned visual encoder and a position- aware adapter. The overall model architecture as well as the input-output interface are quite concise and we elaboratedly design a 3-stage training pipeline to optimize the whole model upon a vast collection of image-text corpus. Our pre-trained checkpoint, termed Qwen-VL, is capable of perceiving and understanding visual inputs, generating desired responses according to given prompts, and accomplishing various vision-language tasks such as image captioning, question answering, text-oriented question answering, and visual grounding. Qwen-VL-Chat is the instruction-tuned vision-language chatbot based on Qwen-VL. As shown in Fig. 2, Qwen-VL-Chat is able to interact with users and perceive the input images following the intention of users. Specifically, the features of the Qwen-VL series models include: â ¢ Leading performance: Qwen-VLs achieve top-tier accuracy on a vast of vision-centric understanding benchmarks compared to counterparts with similar scales. Besides, Qwen-VLâ s stuning performance covers not only the conventional benchmarks e.g., captioning, question-answering, grounding), but also some recently introduced dialogue benchmarks. â ¢ Multi-lingual: Similar to Qwen-LM, Qwen-VLs are trained upon multilingual image-text data with a considerable amount of corpus being in English and Chinese. In this way, Qwen-VLs naturally support English, Chinese, and multilingual instructions. â ¢ Multi-image: In the training phase, we allow arbitrary interleaved image-text data as Qwen-VLâ s inputs. This feature allows our Qwen-Chat-VL to compare, understand, and analyze the context when multiple images are given. â ¢ Fine-grained visual understanding: Thanks to the higher-resolution input size and fine-grained corpus we used in training, Qwen-VLs exhibit highly competitive fine-grained visual understanding ability.
2308.12966#3
2308.12966#5
2308.12966
[ "2211.01335" ]
2308.12966#5
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
Compared to existing vision-language generalists, our Qwen-VLs possess much better grounding, text-reading, text-oriented question answering, and fine-grained dialog performance. # 2 Methodology # 2.1 Model Architecture The overall network architecture of Qwen-VL consists of three components and the details of model parameters are shown in Table 1: Large Language Model: Qwen-VL adopts a large language model as its foundation component. The model is initialized with pre-trained weights from Qwen-7B (Qwen, 2023). Visual Encoder: The visual encoder of Qwen-VL uses the Vision Transformer (ViT) (Dosovitskiy et al., 2021) architecture, initialized with pre-trained weights from Openclipâ s ViT-bigG (Ilharco et al., 2021). During both training and inference, input images are resized to a specific resolution. The visual encoder processes images by splitting them into patches with a stride of 14, generating a set of image features. Position-aware Vision-Language Adapter: To alleviate the efficiency issues arising from long image feature sequences, Qwen-VL introduces a vision-language adapter that compresses the image features. This adapter comprises a single-layer cross-attention module initialized randomly. The module uses a group of trainable vectors (Embeddings) as query vectors and the image features from the visual encoder as keys for cross- attention operations. This mechanism compresses the visual feature sequence to a fixed length of 256. The ablation about the number of queries is shown in Appendix E.2. Additionally, considering the significance 3 of positional information for fine-grained image comprehension, 2D absolute positional encodings are incorporated into the cross-attention mechanismâ s query-key pairs to mitigate the potential loss of positional details during compression. The compressed image feature sequence of length 256 is subsequently fed into the large language model. # Table 1: Details of Qwen-VL model parameters. Vision Encoder VL Adapter LLM Total 1.9B 0.08B 7.7B 9.6B Stagel: Pretrainin Stage2:Multi-task Stage3: Supervised ee 6 Pretraining Finetuning d a N Learnable N Learnable N =| Query â | CrossAttn ad Query â CrossAttn ad Embs Embs Learnable Query Embs â â â â ViT & ViT # â
2308.12966#4
2308.12966#6
2308.12966
[ "2211.01335" ]
2308.12966#6
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
o | Low Resolution a High Resolution â o | High Resolution. Image-Text Pairs Multi-task an Chat Interleaved â ¬ Interleaved VL Data VL Data Figure 3: The training pipeline of the Qwen-VL series. # 2.2 Inputs and Outputs Image Input: Images are processed through the visual encoder and adapter, yielding fixed-length sequences of image features. To differentiate between image feature input and text feature input, two special tokens (<img> and </img>) are appended to the beginning and end of the image feature sequence respectively, signifying the start and end of image content. Bounding Box Input and Output: To enhance the modelâ s capacity for fine-grained visual understanding and grounding, Qwen-VLâ s training involves data in the form of region descriptions, questions, and detections. Differing from conventional tasks involving image-text descriptions or questions, this task necessitates the modelâ s accurate understanding and generation of region descriptions in a designated format. For any given bounding box, a normalization process is applied (within the range [0, 1000)) and transformed into a specified string format: "(Xtoplef t, Ytoplef t), (Xbottomright, Ybottomright)". The string is tokenized as text and does not require an additional positional vocabulary. To distinguish between detection strings and regular text strings, two special tokens (<box> and </box> are added at the beginning and end of the bounding box string. Additionally, to appropriately associate bounding boxes with their corresponding descriptive words or sentences, another set of special tokens (<ref> and </ref>) is introduced, marking the content referred to by the bounding box.
2308.12966#5
2308.12966#7
2308.12966
[ "2211.01335" ]
2308.12966#7
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
4 # 3 Training As illustrated in Fig. 3, the training process of the Qwen-VL model consists of three stages: two stages of pre-training and a final stage of instruction fine-tuning training. # 3.1 Pre-training In the first stage of pre-training, we mainly utilize a large-scale, weakly labeled, web-crawled set of image-text pairs. Our pre-training dataset is composed of several publicly accessible sources and some in-house data. We made an effort to clean the dataset of certain patterns. As summarized in Table 2, the original dataset contains a total of 5 billion image-text pairs, and after cleaning, 1.4 billion data remain, with 77.3% English (text) data and 22.7% Chinese (text) data. Table 2: Details of Qwen-VL pre-training data. LAION-en and LAION-zh are the English and Chinese language subset of LAION-5B (Schuhmann et al., 2022a). LAION-COCO (Schuhmann et al., 2022b) is a synthetic dataset generated from LAION-en. DataComp (Gadre et al., 2023) and Coyo (Byeon et al., 2022) are collections of image-text pairs. CC12M (Changpinyo et al., 2021), CC3M (Sharma et al., 2018), SBU (Ordonez et al., 2011) and COCO Caption (Chen et al., 2015) are academic caption datasets. Language Dataset Original Cleaned Remaining% English LAION-en LAION-COCO DataComp Coyo CC12M CC3M SBU COCO Caption 2B 600M 1.4B 700M 12M 3M 1M 0.6M 280M 300M 300M 200M 8M 3M 0.8M 0.6M 14% 50% 21% 28% 66% 100% 80% 100% Chinese LAION-zh In-house Data 108M 220M 105M 220M 97% 100% Total 5B 1.4B 28%
2308.12966#6
2308.12966#8
2308.12966
[ "2211.01335" ]
2308.12966#8
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
We freeze the large language model and only optimize the vision encoder and VL adapter in this stage. The input images are resized to 224 Ã 224. The training objective is to minimize the cross-entropy of the text tokens. The maximum learning rate is 2eâ 4 and the training process uses a batch size of 30720 for the image-text pairs, and the entire first stage of pre-training lasts for 50,000 steps, consuming approximately 1.5 billion image-text samples. More hyperparameters are detailed in Appendix C and the convergence curve of this stage is shown in Figure 6. # 3.2 Multi-task Pre-training In the second stage of multi-task pre-training, we introduce high-quality and fine-grained VL annotation data with a larger input resolution and interleaved image-text data. As summarized in Table 3, we trained Qwen-VL on 7 tasks simultaneously. For text generation, we use the in-house collected corpus to maintain the LLMâ s ability. Captioning data is the same with Table 2 except for far fewer samples and excluding LAION-COCO. We use a mixture of publicly available data for the VQA task which includes GQA (Hudson and Manning, 2019), VGQA (Krishna et al., 2017), VQAv2 (Goyal et al., 2017), DVQA (Kafle et al., 2018), OCR- VQA (Mishra et al., 2019) and DocVQA (Mathew et al., 2021). We follow Kosmos-2 to use the GRIT (Peng et al., 2023) dataset for the grounding task with minor modifications. For the reference grounding and grounded captioning duality tasks, we construct training samples from GRIT (Peng et al., 2023), Visual Genome (Krishna et al., 2017), RefCOCO (Kazemzadeh et al., 2014), RefCOCO+, and RefCOCOg (Mao et al.,
2308.12966#7
2308.12966#9
2308.12966
[ "2211.01335" ]
2308.12966#9
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
5 2016). In order to improve the text-oriented tasks, we collect pdf and HTML format data from Common Crawl1 and generate synthetic OCR data in English and Chinese language with natural scenery background, following (Kim et al., 2022). Finally, we simply construct interleaved image-text data by packing the same task data into sequences of length 2048. # Table 3: Details of Qwen-VL multi-task pre-training data. Task # Samples Dataset Captioning VQA Grounding2 Ref Grounding Grounded Cap. OCR Pure-text Autoregression 19.7M 3.6M 3.5M 8.7M 8.7M 24.8M 7.8M LAION-en & zh, DataComp, Coyo, CC12M & 3M, SBU, COCO, In-house Data GQA, VGQA, VQAv2, DVQA, OCR-VQA, DocVQA, TextVQA, ChartQA, AI2D GRIT GRIT, Visual Genome, RefCOCO, RefCOCO+, RefCOCOg GRIT, Visual Genome, RefCOCO, RefCOCO+, RefCOCOg SynthDoG-en & zh, Common Crawl pdf & HTML In-house Data We increase the input resolution of the visual encoder from 224 Ã 224 to 448 Ã 448, reducing the information loss caused by image down-sampling. Besides, we ablate the window attention and global attention for higher resolutions of the vision transformer in Appendix E.3. We unlocked the large language model and trained the whole model. The training objective is the same as the pre-training stage. # 3.3 Supervised Fine-tuning During this stage, we finetuned the Qwen-VL pre-trained model through instruction fine-tuning to enhance its instruction following and dialogue capabilities, resulting in the interactive Qwen-VL-Chat model. The multi-modal instruction tuning data primarily comes from caption data or dialogue data generated through LLM self-instruction, which often only addresses single-image dialogue and reasoning and is limited to image content comprehension. We construct an additional set of dialogue data through manual annotation, model generation, and strategy concatenation to incorporate localization and multi-image comprehension abilities into the Qwen-VL model. We confirm that the model effectively transfers these capabilities to a wider range of languages and question types.
2308.12966#8
2308.12966#10
2308.12966
[ "2211.01335" ]
2308.12966#10
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
Additionally, we mix multi-modal and pure text dialogue data during training to ensure the modelâ s universality in dialogue capabilities. The instruction tuning data amounts to 350k. In this stage, we freeze the visual encoder and optimize the language model and adapter module. We demonstrate the data format of this stage in Appendix B.2. # 4 Evaluation In this section, we conduct an overall evaluation on various multi-modal tasks to comprehensively assess our modelsâ visual understanding ability. In the following, Qwen-VL denotes the model after the multi-task training, and Qwen-VL-Chat denotes the model after supervised fine-tuning (SFT) stage. Table 9 provides a detailed summary of the used evaluation benchmarks and corresponding metrics. # Image Caption and General Visual Question Answering Image caption and general visual question answering (VQA) are two conventional tasks for vision-language models. Specifically, image caption requires the model to generate a description for a given image and general VQA requires the model to generate an answer for a given image-question pair. 1 https://digitalcorpora.org/corpora/file-corpora/cc-main-2021-31-pdf-untruncated 2This task is to generate noun/phrase grounded captions (Peng et al., 2023).
2308.12966#9
2308.12966#11
2308.12966
[ "2211.01335" ]
2308.12966#11
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
# Table 4: Results on Image Captioning and General VQA.

| Model Type | Model | Nocaps (0-shot) | Flickr30K (0-shot) | VQAv2 | OKVQA | GQA | SciQA-Img (0-shot) | VizWiz (0-shot) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Generalist Models | Flamingo-9B | - | 61.5 | 51.8 | 44.7 | - | - | 28.8 |
| | Flamingo-80B | - | 67.2 | 56.3 | 50.6 | - | - | 31.6 |
| | Unified-IO-XL | 100.0 | - | 77.9 | 54.0 | - | - | - |
| | Kosmos-1 | - | 67.1 | 51.0 | - | - | - | 29.2 |
| | Kosmos-2 | - | 80.5 | 51.1 | - | - | - | - |
| | BLIP-2 (Vicuna-13B) | 103.9 | 71.6 | 65.0 | 45.9 | 32.3 | 61.0 | 19.6 |
| | InstructBLIP (Vicuna-13B) | 121.9 | 82.8 | - | - | 49.5 | 63.1 | 33.4 |
| | Shikra (Vicuna-13B) | - | 73.9 | 77.36 | 47.16 | - | - | - |
| | Qwen-VL (Qwen-7B) | 121.4 | 85.8 | 79.5 | 58.6 | 59.3 | 67.1 | 35.2 |
| | Qwen-VL-Chat | 120.2 | 81.0 | 78.2 | 56.6 | 57.5 | 68.2 | 38.9 |
| Specialist SOTAs | - | 127.0 (PALI-17B) | 84.5 (InstructBLIP-FlanT5-XL) | 86.1 (PALI-X-55B) | 66.1 (PALI-X-55B) | 72.1 (CFR) | 92.53 (LLaVA+GPT-4) | 70.9 (PALI-X-55B) |

For the image caption task, we choose Nocaps (Agrawal et al., 2019) and Flickr30K (Young et al., 2014) as benchmarks and report the CIDEr score (Vedantam et al., 2015) as the metric.
2308.12966#10
2308.12966#12
2308.12966
[ "2211.01335" ]
2308.12966#12
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
We utilize greedy search for caption generation with the prompt "Descripe the image in English:". For general VQA, we utilize five benchmarks: VQAv2 (Goyal et al., 2017), OKVQA (Marino et al., 2019), GQA (Hudson and Manning, 2019), ScienceQA (Image Set) (Lu et al., 2022b) and VizWiz VQA (Gurari et al., 2018). For VQAv2, OKVQA, GQA and VizWiz VQA, we employ open-ended answer generation with a greedy decoding strategy and the prompt "{question} Answer:", without any constraint on the model's output space. For ScienceQA, however, we constrain the model's output to the possible options (instead of open-ended generation), choose the option with the highest confidence as the model's prediction, and report Top-1 accuracy. The overall performance on image captioning and general VQA tasks is reported in Table 4. As the results show, both Qwen-VL and Qwen-VL-Chat achieve clearly better results than previous generalist models on both tasks. Specifically, on the zero-shot image captioning task, Qwen-VL achieves state-of-the-art performance (an 85.8 CIDEr score) on the Flickr30K karpathy-test split, outperforming even previous generalist models with far more parameters (e.g., Flamingo-80B with 80B parameters). On general VQA benchmarks, our models also exhibit distinct advantages. On the VQAv2, OKVQA and GQA benchmarks, Qwen-VL achieves 79.5, 58.6 and 59.3 accuracy respectively, surpassing recently proposed LVLMs by a large margin.
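A minimal sketch of the constrained multiple-choice scoring described above: each candidate option is appended to the prompt and scored by the sum of its token log-probabilities, and the highest-scoring option is taken as the prediction. The model and tokenizer names are placeholders, and the prompt-boundary handling is a simplifying assumption rather than the official Qwen-VL evaluation code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")            # placeholder LM, not Qwen-VL
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def score_option(prompt: str, option: str) -> float:
    """Sum of log-probabilities of the option tokens given the prompt."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + option, return_tensors="pt").input_ids
    logits = model(full_ids).logits                          # (1, T, vocab)
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)    # predict token t+1 from t
    targets = full_ids[:, 1:]
    token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    n_prompt = prompt_ids.shape[1]                           # assumes clean prompt/option split
    return token_lp[:, n_prompt - 1:].sum().item()

prompt = "What is the boiling point of water at sea level? Answer:"   # hypothetical question
options = [" 100 degrees Celsius", " 50 degrees Celsius", " 0 degrees Celsius"]
prediction = max(options, key=lambda o: score_option(prompt, o))
```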
2308.12966#11
2308.12966#13
2308.12966
[ "2211.01335" ]
2308.12966#13
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
It's worth noting that Qwen-VL also shows strong zero-shot performance on the ScienceQA and VizWiz datasets.

# 4.2 Text-oriented Visual Question Answering

Text-oriented visual understanding has broad application prospects in real-world scenarios. We assess our models' ability at text-oriented visual question answering on several benchmarks, including TextVQA (Sidorov et al., 2020), DocVQA (Mathew et al., 2021), ChartQA (Masry et al., 2022), AI2Diagram (Kembhavi et al., 2016), and OCR-VQA (Mishra et al., 2019). Similarly, the results are shown in Table 5. Compared to previous generalist models and recent LVLMs, our models show better performance on most benchmarks, frequently by a large margin.

# 4.3 Refer Expression Comprehension

We show our models' fine-grained image understanding and localization ability by evaluating on a set of referring expression comprehension benchmarks: RefCOCO (Kazemzadeh et al., 2014), RefCOCOg (Mao et al., 2016), RefCOCO+ (Mao et al., 2016) and GRIT (Gupta et al., 2022). Specifically, the referring expression comprehension task requires the model to localize the target object under the guidance of a description.
2308.12966#12
2308.12966#14
2308.12966
[ "2211.01335" ]
2308.12966#14
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
The 7 # Table 5: Results on Text-oriented VQA. Model BLIP-2 (Vicuna-13B) InstructBLIP (Vicuna-13B) mPLUG-DocOwl (LLaMA-7B) Pix2Struct-Large (1.3B) Qwen-VL (Qwen-7B) Qwen-VL-Chat 42.4 50.7 52.6 - 63.8 61.5 - - 62.2 76.6 65.1 62.6 - - 57.4 58.6 65.7 66.3 - - - 42.1 62.3 57.7 - - - 71.3 75.7 70.5 PALI-X-55B (Single-task fine- tuning, without OCR Pipeline) 71.44 80.0 70.0 81.2 75.0 # TextVQA DocVQA ChartQA AI2D OCR-VQA # Table 6: Results on Referring Expression Comprehension task.
2308.12966#13
2308.12966#15
2308.12966
[ "2211.01335" ]
2308.12966#15
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
Model type Model val RefCOCO test-A test-B val RefCOCO+ test-A test-B val Generalist Models Specialist SOTAs GPV-2 OFA-L* Unified-IO VisionLLM-H Shikra-7B Shikra-13B - - 79.96 83.67 - 86.70 87.01 90.61 87.83 91.11 89.36 92.26 Qwen-VL-7B Qwen-VL-7B-Chat 88.55 92.27 90.56 93.19 G-DINO-L 92.64 94.33 UNINEXT-H 92.58 94.18 ONE-PEACE - - 76.39 68.29 76.00 - - 80.24 81.60 87.36 81.81 82.89 87.79 85.34 83.12 88.25 84.51 82.82 88.59 88.24 82.75 88.95 91.46 85.24 89.63 89.26 88.77 92.21 - - - - - - - 61.75 67.57 67.58 - - 72.12 82.27 82.19 74.41 82.64 83.16 77.21 85.58 85.48 76.79 85.96 86.32 75.92 86.13 87.02 79.79 88.73 89.37 83.23 89.22 89.27 - - - - - - 51.50 61.70 78.61 - 69.34 69.03 78.22 - - - - results are shown in Table 6. Compared to previous generalist models or recent LVLMs, our models obtain top-tier results on all benchmarks. # 4.4 Few-shot Learning on Vision-Language Tasks Our model also exhibits satisfactory in-context learning (a.k.a., few-shot learning) ability.
2308.12966#14
2308.12966#16
2308.12966
[ "2211.01335" ]
2308.12966#16
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
As shown in Figure 4, Qwen-VL achieves better performance through in-context few-shot learning on OKVQA (Marino et al., 2019), VizWiz (Gurari et al., 2018), TextVQA (Sidorov et al., 2020), and Flickr30K (Young et al., 2014) when compared with models with a similar number of parameters (Flamingo-9B (Alayrac et al., 2022), OpenFlamingo-9B and IDEFICS-9B). Qwen-VL's performance is even comparable with much larger models (Flamingo-80B and IDEFICS-80B). Note that we adopt naïve random sampling to construct the few-shot exemplars; more sophisticated few-shot exemplar construction methods such as RICES (Yang et al., 2022b) are not used, even though they would likely yield better results.
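The contrast between the naïve random sampling used here and a RICES-style retrieval can be sketched as below. Image embeddings are assumed to be precomputed with some CLIP-like encoder; this is illustrative, not the paper's code.

```python
import random
import numpy as np

def random_exemplars(train_indices: list, k: int, seed: int = 0) -> list:
    """Naïve random sampling of k in-context exemplars."""
    rng = random.Random(seed)
    return rng.sample(train_indices, k)

def rices_exemplars(query_emb: np.ndarray, train_embs: np.ndarray, k: int) -> list:
    """RICES-style selection: the k training examples most similar to the query image."""
    q = query_emb / np.linalg.norm(query_emb)
    t = train_embs / np.linalg.norm(train_embs, axis=1, keepdims=True)
    sims = t @ q                                   # cosine similarity to every candidate
    return np.argsort(-sims)[:k].tolist()

# usage with dummy features: pick 4 exemplars for one test image
train_embs = np.random.randn(1000, 512).astype(np.float32)
query_emb = np.random.randn(512).astype(np.float32)
print(random_exemplars(list(range(1000)), k=4))
print(rices_exemplars(query_emb, train_embs, k=4))
```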
2308.12966#15
2308.12966#17
2308.12966
[ "2211.01335" ]
2308.12966#17
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
Figure 4: Few-shot learning results of Qwen-VL in comparison with other models.

# Table 7: Results on Instruction-following benchmarks.

| Model | TouchStone En | TouchStone Cn | SEED-Bench All | SEED-Bench Img | SEED-Bench Video | MME Perception | MME Cognition |
| --- | --- | --- | --- | --- | --- | --- | --- |
| VisualGLM | - | 247.1 | - | - | - | 705.31 | 181.79 |
| PandaGPT | 488.5 | - | - | - | - | 642.59 | 228.57 |
| MiniGPT4 | 531.7 | - | 42.8 | 47.4 | 29.9 | 581.67 | 144.29 |
| InstructBLIP | 552.4 | - | 53.4 | 58.8 | 38.1 | 1212.82 | 291.79 |
| LLaMA-AdapterV2 | 590.1 | - | 32.7 | 35.2 | 25.8 | 972.67 | 248.93 |
| LLaVA | 602.7 | - | 33.5 | 37.0 | 23.8 | 502.82 | 214.64 |
| mPLUG-Owl | 605.4 | - | 34.0 | 37.9 | 23.0 | 967.34 | 276.07 |
| Qwen-VL | - | - | 56.3 | 62.3 | 39.1 | - | - |
| Qwen-VL-Chat | 645.2 | 401.2 | 58.2 | 65.4 | 37.8 | 1487.58 | 360.71 |

# 4.5 Instruction Following in Real-world User Behavior

In addition to the previous conventional vision-language evaluations, to evaluate our Qwen-VL-Chat model's capacity under real-world user behavior, we further conduct evaluations on TouchStone (Bai et al., 2023), SEED-Bench (Li et al., 2023b), and MME (Fu et al., 2023). TouchStone is an open-ended vision-language instruction-following benchmark. We compare the instruction-following ability of Qwen-VL-Chat with other instruction-tuned LVLMs in both English and Chinese on the TouchStone benchmark.
2308.12966#16
2308.12966#18
2308.12966
[ "2211.01335" ]
2308.12966#18
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
SEED-Bench consists of 19K multiple-choice questions with accurate human annotations for evaluating Multimodal LLMs, covering 12 evaluation dimensions including both the spatial and temporal understanding. MME measures both perception and cognition abilities on a total of 14 subtasks. The results on three benchmarks are shown in Table 7. Qwen-VL-Chat has achieved obvious advantages over other LVLMs on all three datasets, indicating that our model performs better in understanding and answering diverse user instructions. In SEED-Bench, we have found that our modelâ s visual capabilities can be effectively transferred to video tasks by simply sampling four frames. In terms of the overall scores presented in TouchStone, our model demonstrates a clear advantage compared to other LVLMs, especially in terms of its Chinese capabilities. In terms of the broad categories of abilities, our model exhibits a more pronounced advantage in understanding and recognition, particularly in areas such as text recognition and chart analysis. For more detailed information, please refer to the TouchStone dataset. # 5 Related Work In recent years, researchers have shown considerable interest in vision-language learning (Su et al., 2019; Chen et al., 2020; Li et al., 2020; Zhang et al., 2021; Li et al., 2021b; Lin et al., 2021; Kim et al., 2021; Dou et al., 2022; Zeng et al., 2021; Li et al., 2021a, 2022), especially in the development of multi-task generalist models (Hu and Singh, 2021; Singh et al., 2022; Zhu et al., 2022; Yu et al., 2022; Wang et al., 2022a; Lu et al., 2022a; Bai et al., 2022). CoCa (Yu et al., 2022) proposes an encoder-decoder structure to address image-text retrieval and vision-language generation tasks simultaneously. OFA (Wang et al., 2022a) transforms specific vision-language tasks into sequence-to-sequence tasks using customized task instructions. Unified I/O (Lu et al., 2022a) further introduces more tasks like segmentation and depth estimation into a unified framework.
2308.12966#17
2308.12966#19
2308.12966
[ "2211.01335" ]
2308.12966#19
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
Another category of research focuses on building vision-language representation models (Radford et al., 2021; Jia et al., 2021; Zhai et al., 2022; Yuan et al., 2021; Yang et al., 2022a). CLIP (Radford et al., 2021) leverages contrastive learning and large amounts of data to align images and language in a semantic space, resulting in strong generalization capabilities across a wide range of downstream tasks. BEIT-3 (Wang et al., 2022b) employs a mixture-of-experts (MOE) structure and unified masked token prediction objective, achieving state-of-the-art results on various visual-language tasks. In addition to vision-language learning, ImageBind (Girdhar et al., 2023) and ONE-PEACE (Wang et al., 2023) align more modalities such as speech into a unified semantic space, thus creating more general representation models. Despite achieving significant progress, previous vision-language models still have several limitations such
2308.12966#18
2308.12966#20
2308.12966
[ "2211.01335" ]
2308.12966#20
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
as poor robustness in instruction following, limited generalization capabilities on unseen tasks, and a lack of in-context abilities. With the rapid development of large language models (LLMs) (Brown et al., 2020; OpenAI, 2023; Anil et al., 2023; Gao et al., 2023; Qwen, 2023), researchers have started building more powerful large vision-language models (LVLMs) based on LLMs (Alayrac et al., 2022; Chen et al., 2022; Li et al., 2023c; Dai et al., 2023; Huang et al., 2023; Peng et al., 2023; Zhu et al., 2023; Liu et al., 2023; Ye et al., 2023b,a; Chen et al., 2023a; Li et al., 2023a; Zhang et al., 2023; Sun et al., 2023). BLIP-2 (Li et al., 2023c) proposes Q-Former to align frozen vision foundation models and LLMs. Meanwhile, LLaVA (Liu et al., 2023) and MiniGPT4 (Zhu et al., 2023) introduce visual instruction tuning to enhance instruction-following capabilities in LVLMs. Additionally, mPLUG-DocOwl (Ye et al., 2023a) incorporates document understanding capabilities into LVLMs by introducing digital document data. Kosmos-2 (Peng et al., 2023), Shikra (Chen et al., 2023a), and BuboGPT (Zhao et al., 2023) further enhance LVLMs with visual grounding abilities, enabling region description and localization. In this work, we integrate image captioning, visual question answering, OCR, document understanding, and visual grounding capabilities into Qwen-VL. The resulting model achieves outstanding performance on these diverse tasks.

# 6 Conclusion and Future Work

We release the Qwen-VL series, a set of large-scale multilingual vision-language models that aim to facilitate multimodal research. Qwen-VL outperforms similar models across various benchmarks, supporting multilingual conversations, multi-image interleaved conversations, grounding in Chinese, and fine-grained recognition.
2308.12966#19
2308.12966#21
2308.12966
[ "2211.01335" ]
2308.12966#21
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
Moving forward, we are dedicated to further enhancing Qwen-VL's capabilities along several key dimensions:

• Integrating Qwen-VL with more modalities, such as speech and video.
• Augmenting Qwen-VL by scaling up the model size, the training data, and the input resolution, enabling it to handle more complex and intricate relationships within multimodal data.
• Expanding Qwen-VL's prowess in multi-modal generation, specifically in generating high-fidelity images and fluent speech.

# References

Harsh Agrawal, Karan Desai, Yufei Wang, Xinlei Chen, Rishabh Jain, Mark Johnson, Dhruv Batra, Devi Parikh, Stefan Lee, and Peter Anderson. nocaps: novel object captioning at scale. In ICCV, 2019.

Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. In NeurIPS, 2022.

Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. arXiv:2305.10403, 2023.

Jinze Bai, Rui Men, Hao Yang, Xuancheng Ren, Kai Dang, Yichang Zhang, Xiaohuan Zhou, Peng Wang, Sinan Tan, An Yang, et al. Ofasys: A multi-modal multi-task learning system for building generalist models. arXiv:2212.04408, 2022.

Shuai Bai, Shusheng Yang, Jinze Bai, Peng Wang, Xingxuan Zhang, Junyang Lin, Xinggang Wang, Chang Zhou, and Jingren Zhou.
2308.12966#20
2308.12966#22
2308.12966
[ "2211.01335" ]
2308.12966#22
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
Touchstone: Evaluating vision-language models by language models. arXiv:2308.16890, 2023. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. In NeurIPS, 2020. 10 Minwoo Byeon, Beomhee Park, Haecheon Kim, Sungjun Lee, Woonhyuk Baek, and Saehoon Kim. Coyo-700m:
2308.12966#21
2308.12966#23
2308.12966
[ "2211.01335" ]
2308.12966#23
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
Image-text pair dataset, 2022. URL https://github.com/kakaobrain/coyo-dataset. Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. In CVPR, 2021. Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, and Rui Zhao.
2308.12966#22
2308.12966#24
2308.12966
[ "2211.01335" ]
2308.12966#24
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
Shikra: Unleashing multimodal llmâ s referential dialogue magic. arXiv:2306.15195, 2023a. Xi Chen, Xiao Wang, Soravit Changpinyo, AJ Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, et al. Pali: A jointly-scaled multilingual language-image model. arXiv:2209.06794, 2022. Xi Chen, Josip Djolonga, Piotr Padlewski, Basil Mustafa, Soravit Changpinyo, Jialin Wu, Carlos Riquelme Ruiz, Sebastian Goodman, Xiao Wang, Yi Tay, et al. Pali-x: On scaling up a multilingual vision and language model. arXiv preprint arXiv:2305.18565, 2023b. Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco captions: Data collection and evaluation server. arXiv:1504.00325, 2015. Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. Uniter: Universal image-text representation learning. In ECCV, 2020. Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructblip: Towards general-purpose vision-language models with instruction tuning. arXiv:2305.06500, 2023.
2308.12966#23
2308.12966#25
2308.12966
[ "2211.01335" ]
2308.12966#25
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Un- terthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021. Zi-Yi* Dou, Aishwarya* Kamath, Zhe* Gan, Pengchuan Zhang, Jianfeng Wang, Linjie Li, Zicheng Liu, Ce Liu, Yann LeCun, Nanyun Peng, Jianfeng Gao, and Lijuan Wang. Coarse-to-fine vision-language pre-training with fusion in the backbone. In NeurIPS, 2022. Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Zhenyu Qiu, Wei Lin, Jinrui Yang, Xiawu Zheng, et al. Mme: A comprehensive evaluation benchmark for multimodal large language models. arXiv:2306.13394, 2023. Samir Yitzhak Gadre, Gabriel Ilharco, Alex Fang, Jonathan Hayase, Georgios Smyrnis, Thao Nguyen, Ryan Marten, Mitchell Wortsman, Dhruba Ghosh, Jieyu Zhang, et al. Datacomp: In search of the next generation of multimodal datasets. arXiv:2304.14108, 2023. Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, et al.
2308.12966#24
2308.12966#26
2308.12966
[ "2211.01335" ]
2308.12966#26
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
Llama-adapter v2: Parameter-efficient visual instruction model. arXiv:2304.15010, 2023. Rohit Girdhar, Alaaeldin El-Nouby, Zhuang Liu, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, and Ishan Misra. Imagebind: One embedding space to bind them all. In CVPR, 2023. Google. Puppeteer, 2023. URL https://github.com/puppeteer/puppeteer.
2308.12966#25
2308.12966#27
2308.12966
[ "2211.01335" ]
2308.12966#27
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In CVPR, 2017. Tanmay Gupta, Ryan Marten, Aniruddha Kembhavi, and Derek Hoiem. Grit: General robust image task benchmark. arXiv:2204.13653, 2022. Danna Gurari, Qing Li, Abigale J Stangl, Anhong Guo, Chi Lin, Kristen Grauman, Jiebo Luo, and Jeffrey P Bigham. Vizwiz grand challenge: Answering visual questions from blind people. In CVPR, 2018. 11 Ronghang Hu and Amanpreet Singh. Unit: Multimodal multitask learning with a unified transformer. In ICCV, 2021. Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, Saksham Singhal, Shuming Ma, Tengchao Lv, Lei Cui, Owais Khan Mohammed, Qiang Liu, et al.
2308.12966#26
2308.12966#28
2308.12966
[ "2211.01335" ]
2308.12966#28
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
Language is not all you need: Aligning perception with language models. arXiv:2302.14045, 2023. Drew A Hudson and Christopher D Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In CVPR, 2019. Gabriel Ilharco, Mitchell Wortsman, Ross Wightman, Cade Gordon, Nicholas Carlini, Rohan Taori, Achal Dave, Vaishaal Shankar, Hongseok Namkoong, John Miller, Hannaneh Hajishirzi, Ali Farhadi, and Ludwig Schmidt. Openclip, 2021. URL https://doi.org/10.5281/zenodo.5143773. Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V Le, Yunhsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. arXiv:2102.05918, 2021. Kushal Kafle, Brian Price, Scott Cohen, and Christopher Kanan. Dvqa: Understanding data visualizations via question answering. In CVPR, 2018. Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg. Referitgame:
2308.12966#27
2308.12966#29
2308.12966
[ "2211.01335" ]
2308.12966#29
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
Referring to objects in photographs of natural scenes. In EMNLP, 2014. Aniruddha Kembhavi, Mike Salvato, Eric Kolve, Minjoon Seo, Hannaneh Hajishirzi, and Ali Farhadi. A diagram is worth a dozen images. In ECCV, 2016. Geewook Kim, Teakgyu Hong, Moonbin Yim, JeongYeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, and Seunghyun Park. Ocr-free document understanding transformer. In ECCV, 2022. Wonjae Kim, Bokyung Son, and Ildoo Kim. Vilt: Vision-and-language transformer without convolution or region supervision. In ICML, 2021. Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. In IJCV, 2017. Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu.
2308.12966#28
2308.12966#30
2308.12966
[ "2211.01335" ]
2308.12966#30
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
Otter: A multi-modal model with in-context instruction tuning. arXiv:2305.03726, 2023a. Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, and Ying Shan. Seed-bench: Benchmarking multimodal llms with generative comprehension. arXiv:2307.16125, 2023b. Junnan Li, Ramprasaath R Selvaraju, Akhilesh Deepak Gotmare, Shafiq Joty, Caiming Xiong, and Steven Hoi. Align before fuse: Vision and language representation learning with momentum distillation. In NeurIPS, 2021a. Junnan Li, Dongxu Li, Caiming Xiong, and Steven C. H. Hoi. Blip:
2308.12966#29
2308.12966#31
2308.12966
[ "2211.01335" ]
2308.12966#31
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
Bootstrapping language-image pre-training for unified vision-language understanding and generation. In ICML, 2022. Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv:2301.12597, 2023c. Wei Li, Can Gao, Guocheng Niu, Xinyan Xiao, Hao Liu, Jiachen Liu, Hua Wu, and Haifeng Wang. UNIMO: towards unified-modal understanding and generation via cross-modal contrastive learning. In ACL, 2021b. Xiujun Li, Xi Yin, Chunyuan Li, Xiaowei Hu, Pengchuan Zhang, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, and Jianfeng Gao. Oscar: Object-semantics aligned pre-training for vision-language tasks. In ECCV, 2020.
2308.12966#30
2308.12966#32
2308.12966
[ "2211.01335" ]
2308.12966#32
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
12 Junyang Lin, Rui Men, An Yang, Chang Zhou, Ming Ding, Yichang Zhang, Peng Wang, Ang Wang, Le Jiang, Xianyan Jia, et al. M6: A chinese multimodal pretrainer. In KDD, 2021. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv:2304.08485, 2023. Jiasen Lu, Christopher Clark, Rowan Zellers, Roozbeh Mottaghi, and Aniruddha Kembhavi. Unified-io: A unified model for vision, language, and multi-modal tasks. arXiv:2206.08916, 2022a. Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. In NeurIPS, 2022b. Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L Yuille, and Kevin Murphy. Gener- ation and comprehension of unambiguous object descriptions. In CVPR, 2016. Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. Ok-vqa: A visual question answering benchmark requiring external knowledge. In CVPR, 2019. Ahmed Masry, Do Xuan Long, Jia Qing Tan, Shafiq Joty, and Enamul Hoque. Chartqa: A benchmark for question answering about charts with visual and logical reasoning. arXiv:2203.10244, 2022. Minesh Mathew, Dimosthenis Karatzas, and CV Jawahar. Docvqa: A dataset for vqa on document images. In WACV, 2021.
2308.12966#31
2308.12966#33
2308.12966
[ "2211.01335" ]
2308.12966#33
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
Anand Mishra, Shashank Shekhar, Ajeet Kumar Singh, and Anirban Chakraborty. Ocr-vqa: Visual question answering by reading text in images. In ICDAR, 2019. Openai. Chatml documents. URL https://github.com/openai/openai-python/blob/main/chatml.md. OpenAI. Gpt-4 technical report, 2023. Vicente Ordonez, Girish Kulkarni, and Tamara Berg. Im2text: Describing images using 1 million captioned photographs. In NeurIPS, 2011. Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, and Furu Wei.
2308.12966#32
2308.12966#34
2308.12966
[ "2211.01335" ]
2308.12966#34
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
Kosmos-2: Grounding multimodal large language models to the world. arXiv:2306.14824, 2023. Qwen. Introducing qwen-7b: Open foundation and human-aligned models (of the state-of-the-arts), 2023. URL https://github.com/QwenLM/Qwen-7B. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, 2021. Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. arXiv:2210.08402, 2022a.
2308.12966#33
2308.12966#35
2308.12966
[ "2211.01335" ]
2308.12966#35
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
Christoph Schuhmann, Andreas Köpf, Richard Vencu, Theo Coombes, and Romain Beaumont. Laion coco: 600m synthetic captions from laion2b-en. https://laion.ai/blog/laion-coco/, 2022b. Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hyper- nymed, image alt-text dataset for automatic image captioning. In ACL, 2018. Oleksii Sidorov, Ronghang Hu, Marcus Rohrbach, and Amanpreet Singh. Textcaps: a dataset for image captioning with reading comprehension. In ECCV, 2020.
2308.12966#34
2308.12966#36
2308.12966
[ "2211.01335" ]
2308.12966#36
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
13 Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. Flava: A foundational language and vision alignment model. In CVPR, 2022. Artifex Software. Pymupdf, 2015. URL https://github.com/pymupdf/PyMuPDF. Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai.
2308.12966#35
2308.12966#37
2308.12966
[ "2211.01335" ]
2308.12966#37
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
Vl-bert: Pre-training of generic visual-linguistic representations. In ICLR, 2019. Quan Sun, Qiying Yu, Yufeng Cui, Fan Zhang, Xiaosong Zhang, Yueze Wang, Hongcheng Gao, Jingjing Liu, Tiejun Huang, and Xinlong Wang. Generative pretraining in multimodality. arXiv:2307.05222, 2023. Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh.
2308.12966#36
2308.12966#38
2308.12966
[ "2211.01335" ]
2308.12966#38
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
Cider: Consensus-based image description evaluation. In CVPR, 2015. Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. Ofa: Unifying architectures, tasks, and modalities through a simple sequence-to- sequence learning framework. In ICML, 2022a. Peng Wang, Shijie Wang, Junyang Lin, Shuai Bai, Xiaohuan Zhou, Jingren Zhou, Xinggang Wang, and Chang Zhou.
2308.12966#37
2308.12966#39
2308.12966
[ "2211.01335" ]
2308.12966#39
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
One-peace: Exploring one general representation model toward unlimited modalities. arXiv:2305.11172, 2023. Wenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal, Owais Khan Mohammed, Saksham Singhal, Subhojit Som, et al. Image as a foreign language: Beit pretraining for all vision and vision-language tasks. arXiv:2208.10442, 2022b. An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, and Chang Zhou. Chinese clip: Contrastive vision-language pretraining in chinese. arXiv:2211.01335, 2022a. Zhengyuan Yang, Zhe Gan, Jianfeng Wang, Xiaowei Hu, Yumao Lu, Zicheng Liu, and Lijuan Wang. An empirical study of gpt-3 for few-shot knowledge-based vqa. In AAAI, 2022b. Jiabo Ye, Anwen Hu, Haiyang Xu, Qinghao Ye, Ming Yan, Yuhao Dan, Chenlin Zhao, Guohai Xu, Chenliang Li, Junfeng Tian, et al. mplug-docowl: Modularized multimodal large language model for document understanding. arXiv:2307.02499, 2023a. Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. mplug-owl: Modularization empowers large language models with multimodality. arXiv:2304.14178, 2023b. Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. In ACL, 2014. Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, and Yonghui Wu.
2308.12966#38
2308.12966#40
2308.12966
[ "2211.01335" ]
2308.12966#40
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
Coca: Contrastive captioners are image-text foundation models. arXiv:2205.01917, 2022. Lu Yuan, Dongdong Chen, Yi-Ling Chen, Noel C. F. Codella, Xiyang Dai, Jianfeng Gao, Houdong Hu, Xuedong Huang, Boxin Li, Chunyuan Li, Ce Liu, Mengchen Liu, Zicheng Liu, Yumao Lu, Yu Shi, Lijuan Wang, Jianfeng Wang, Bin Xiao, Zhen Xiao, Jianwei Yang, Michael Zeng, Luowei Zhou, and Pengchuan Zhang. Florence: A new foundation model for computer vision. arXiv:2111.11432, 2021. Yan Zeng, Xinsong Zhang, and Hang Li. Multi-grained vision language pre-training: Aligning texts with visual concepts. arXiv:2111.08276, 2021. Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner, Daniel Keysers, Alexander Kolesnikov, and Lucas Beyer. Lit: Zero-shot transfer with locked-image text tuning. In CVPR, 2022. Hang Zhang, Xin Li, and Lidong Bing. Video-llama: An instruction-tuned audio-visual language model for video understanding. arXiv:2306.02858, 2023.
2308.12966#39
2308.12966#41
2308.12966
[ "2211.01335" ]
2308.12966#41
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
14 Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. Vinvl: Revisiting visual representations in vision-language models. In CVPR, 2021. Yang Zhao, Zhijie Lin, Daquan Zhou, Zilong Huang, Jiashi Feng, and Bingyi Kang. Bubogpt: Enabling visual grounding in multi-modal llms. arXiv:2307.08581, 2023. Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing vision- language understanding with advanced large language models. arXiv:2304.10592, 2023. Xizhou Zhu, Jinguo Zhu, Hao Li, Xiaoshi Wu, Hongsheng Li, Xiaohua Wang, and Jifeng Dai. Uni-perceiver: Pre-training unified architecture for generic perception for zero-shot and few-shot tasks. In CVPR, 2022.
2308.12966#40
2308.12966#42
2308.12966
[ "2211.01335" ]
2308.12966#42
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
# A Dataset details

# A.1 Image-text pairs

We use web-crawled image-text pair datasets for pre-training, including LAION-en (Schuhmann et al., 2022a), LAION-zh (Schuhmann et al., 2022a), LAION-COCO (Schuhmann et al., 2022b), DataComp (Gadre et al., 2023) and Coyo (Byeon et al., 2022). We clean these noisy data in several steps:

1. Removing pairs whose image has too large an aspect ratio
2. Removing pairs whose image is too small
3. Removing pairs with a harsh CLIP score (dataset-specific)
4. Removing pairs whose text contains non-English or non-Chinese characters
5. Removing pairs whose text contains emoji characters
6. Removing pairs whose text is too short or too long
7. Cleaning the HTML-tagged parts of the text
8. Cleaning the text with certain irregular patterns

For academic caption datasets, we remove pairs whose text contains the special tags in CC12M (Changpinyo et al., 2021) and SBU (Ordonez et al., 2011). If there is more than one text matching the same image, we select the longest one.

# A.2 VQA

For the VQAv2 (Goyal et al., 2017) dataset, we select the answer annotation based on the maximum confidence. For other VQA datasets, we did not do anything special.
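An illustrative sketch of the A.1 cleaning steps is given below. All thresholds (minimum side length, aspect ratio, CLIP score, text length) are placeholder values chosen for the example; the paper only states that the CLIP-score threshold is dataset-specific.

```python
# Minimal sketch of the image-text pair filter; not the paper's actual pipeline.
import re

HTML_TAG = re.compile(r"<[^>]+>")
EMOJI = re.compile(r"[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def keep_pair(width: int, height: int, text: str, clip_score: float,
              min_side: int = 64, max_ratio: float = 3.0,
              min_clip: float = 0.28, min_words: int = 3, max_words: int = 256) -> bool:
    if min(width, height) < min_side:                        # image too small
        return False
    if max(width, height) / min(width, height) > max_ratio:  # extreme aspect ratio
        return False
    if clip_score < min_clip:                                # weak image-text match
        return False
    if EMOJI.search(text):                                   # emoji characters
        return False
    n_words = len(text.split())
    if n_words < min_words or n_words > max_words:           # text too short / too long
        return False
    return True

def clean_text(text: str) -> str:
    # strip HTML-tagged parts; language and irregular-pattern checks would follow here
    return HTML_TAG.sub(" ", text).strip()
```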
2308.12966#41
2308.12966#43
2308.12966
[ "2211.01335" ]
2308.12966#43
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
# A.3 Grounding

For the GRIT (Peng et al., 2023) dataset, we found that a single caption often contains many recursive grounding-box labels. We use a greedy algorithm to clean each caption so that every image keeps as many box labels as possible with no recursive box labels. For other grounding datasets, we simply concatenate each noun/phrase with its respective bounding-box coordinates.

# A.4 OCR

We generate the synthetic OCR dataset using SynthDoG (Kim et al., 2022). Specifically, we use the COCO (Lin et al., 2014) train2017 and unlabeled2017 dataset splits as the natural scenery background. We then select 41 English fonts and 11 Chinese fonts to generate text, using the default SynthDoG hyperparameters. We track the generated text locations in the image, convert them to quadrilateral coordinates, and use these coordinates as training labels. A visualization example is illustrated in the second row of Figure 5.

For all the PDF data we collected, we follow the steps below to pre-process the data using PyMuPDF (Software, 2015), obtaining the rendering result of each page in a PDF file as well as all the text annotations with their bounding boxes.

1. Extracting all texts and their bounding boxes for each page.
2308.12966#42
2308.12966#44
2308.12966
[ "2211.01335" ]
2308.12966#44
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
Figure 5: Visualization of the Grounding and OCR data used for training Qwen-VL.

2. Rendering each page and saving it as an image file.
3. Removing images that are too small.
4. Removing images with too many or too few characters.
5. Removing images containing Unicode characters in the "Latin Extended-A" and "Latin Extended-B" blocks.
6. Removing images containing Unicode characters in the "Private Use Area (PUA)" block.

For all HTML web pages we collected, we pre-process them in a similar way to the PDF data, but we use Puppeteer (Google, 2023) instead of PyMuPDF to render the HTML pages and obtain the ground-truth annotations. We follow the steps below to pre-process the data.

1. Extracting all texts for each webpage.
2. Rendering each page and saving it as an image file.
3. Removing images that are too small.
4. Removing images with too many or too few characters.
5. Removing images containing Unicode characters in the "Private Use Area (PUA)" block.

# B Data Format Details of Training

# B.1 Data Format of Multi-Task Pre-training

We visualize the multi-task pre-training data format in Box B.1. The box contains all 7 tasks, with black-colored text as the prefix sequence without loss and blue-colored text as the ground-truth labels with loss.
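A minimal sketch of the PDF pre-processing steps listed above, using PyMuPDF to extract per-page text boxes and render each page. The character-count thresholds, DPI, and minimum image size are placeholder values, not the values used in the paper.

```python
import fitz  # PyMuPDF

def in_pua(ch: str) -> bool:
    return 0xE000 <= ord(ch) <= 0xF8FF              # Private Use Area block

def process_pdf(path: str, out_prefix: str, dpi: int = 144,
                min_chars: int = 20, max_chars: int = 4000):
    doc = fitz.open(path)
    kept = []
    for i, page in enumerate(doc):
        words = page.get_text("words")              # tuples: (x0, y0, x1, y1, word, ...)
        text = " ".join(w[4] for w in words)
        if not (min_chars <= len(text) <= max_chars):
            continue                                # too few / too many characters
        if any(in_pua(c) for c in text):
            continue                                # drop pages with PUA characters
        pix = page.get_pixmap(dpi=dpi)              # render the page as an image
        if min(pix.width, pix.height) < 256:
            continue                                # image too small
        image_path = f"{out_prefix}_page{i}.png"
        pix.save(image_path)
        boxes = [(w[4], w[:4]) for w in words]      # text with its bounding box
        kept.append({"image": image_path, "annotations": boxes})
    return kept
```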
2308.12966#43
2308.12966#45
2308.12966
[ "2211.01335" ]
2308.12966#45
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
# B Data Format Details of Training # B.1 Data Format of Multi-Task Pre-training We visualize the Multi-Task Pre-training data format in Box B.1. The Box contains all 7 tasks with the black-colored text as the prefix sequence without loss and blue-colored text as the ground truth labels with loss. 18 Image Captioning <img>cc3m/01581435.jpg</img>Generate the caption in English: design.<eos> the beautiful flowers for Vision Question Answering <img>VG_100K_2/1.jpg</img> Does the bandage have a different color than the wrist band? Answer: No, both the bandage and the wrist band are white.<eos> OCR VQA <img>ocr_vqa/1.jpg</img> What is the title of this book? Answer: Asi Se Dice!, Volume 2: Work- book And Audio Activities (Glencoe Spanish) (Spanish Edition)<eos> Caption with Grounding <img>coyo700m/1.jpg</img>Generate the caption in English with grounding: Beautiful shot of <ref>bees</ref><box>(661,612),(833,812)</box><box>(120,555),(265,770) </box> gathering nectars from <ref>an apricot flower</ref><box>(224,13),(399,313) </box><eos> Referring Grounding <img>VG_100K_2/3.jpg</img><ref>the ear on a giraffe</ref><box>(176,106),(232,160) </box><eos> Grounded Captioning <img>VG_100K_2/4.jpg</img><ref>This</ref><box>(360,542),(476,705)</box> is Yellow cross country ski racing gloves<eos> OCR <img>synthdog/1.jpg</img>OCR with grounding: <ref>It is managed</ref> <quad> (568,121), (625,131), (624,182), (567,172)</quad>...<eos> # B.2 Data Format of Supervised Fine-tuning To better accommodate multi-image dialogue and multiple image inputs, we add the string "Picture id:" before different images, where the id corresponds to the order of image input dialogue.
2308.12966#44
2308.12966#46
2308.12966
[ "2211.01335" ]
2308.12966#46
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
In terms of dialogue format, we construct our instruction-tuning dataset using the ChatML (OpenAI) format, where each statement in an interaction is marked with two special tokens (<im_start> and <im_end>) to facilitate dialogue termination.

The Dataset Format Example of ChatML

<im_start>user
Picture 1: <img>vg/VG_100K_2/649.jpg</img>What is the sign in the picture?<im_end>
<im_start>assistant
The sign is a road closure with an orange rhombus.<im_end>
<im_start>user
How is the weather in the picture?<im_end>
<im_start>assistant
The shape of the road closure sign is an orange rhombus.<im_end>

During training, we ensure consistency between the prediction and training distributions by supervising only the answers and special tokens (blue in the example), and not supervising role names or question prompts.
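The sketch below assembles a ChatML-formatted dialogue and records which spans would be supervised (the assistant answers plus their closing special token), as described above. Spans are tracked at the character level for simplicity; a real pipeline would convert them to token-level loss masks.

```python
IM_START, IM_END = "<im_start>", "<im_end>"

def build_chatml(turns):
    """turns: list of (role, text). Returns (full_text, supervised_character_spans)."""
    text, spans = "", []
    for role, content in turns:
        segment = f"{IM_START}{role}\n{content}{IM_END}\n"
        if role == "assistant":
            # supervise the answer and the closing special token only
            start = len(text) + len(f"{IM_START}{role}\n")
            spans.append((start, len(text) + len(segment) - 1))
        text += segment
    return text, spans

dialogue = [
    ("user", "Picture 1: <img>vg/VG_100K_2/649.jpg</img>What is the sign in the picture?"),
    ("assistant", "The sign is a road closure with an orange rhombus."),
]
chatml_text, loss_spans = build_chatml(dialogue)
```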
2308.12966#45
2308.12966#47
2308.12966
[ "2211.01335" ]
2308.12966#47
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
19 # C Hyperparameters We report the detailed training hyperparameter settings of Qwen-VL in Table 8. # Table 8: Training hyperparameters of Qwen-VL Configuration Pre-training Multi-task Pre-training Supervised Fine-tuning ViT init. Open-CLIP-bigG Qwen-VL 1st-stage Qwen-VL 2nd-stage LLM init. Qwen-7B Qwen-7B Qwen-VL 2nd-stage VL Adapter init. random Qwen-VL 1st-stage Qwen-VL 2nd-stage Image resolution ViT sequence length 2242 256 4482 1024 4482 1024 LLM sequence length 512 2048 2048 Learnable query numbers 256 256 256 Optimizer Optimizer hyperparameter AdamW β1 = 0.9, β2 = 0.98, eps = 1eâ 6 Peak learning rate Minimum learning rate ViT learning rate decay 2eâ 4 1eâ 6 0.95 5eâ 5 1eâ 5 0.95 1eâ 5 1eâ 6 0 ViT Drop path rate 0 Learning rate schedule cosine decay Weight decay 0.05 Gradient clip 1.0 Training steps 50k 19k 8k Warm-up steps 500 400 3k Global batch size 30720 4096 128 Gradient Acc. 6 8 8 Numerical precision Optimizer sharding bfloat16 â Activation checkpointing â Model parallelism â 2 2 Pipeline parallelism â In the first pre-training stage, the model is trained using AdamW optimizer with β1 = 0.9, β2 = 0.98, eps = 1eâ 6. We use the cosine learning rate schedule and set the maximum learning rate of 2eâ 4 and minimum of 1eâ 6 with a linear warm-up of 500 steps. We use a weight decay of 5eâ 2 and a gradient clipping of 1.0. For the ViT image encoder, we apply a layer-wise learning rate decay strategy with a decay factor of 0.95.
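The first-stage optimization described above combines a linear warm-up into a cosine decay between the peak and minimum learning rates with layer-wise learning-rate decay for the ViT encoder. The sketch below mirrors the reported values (peak 2e-4, minimum 1e-6, 500 warm-up steps, decay factor 0.95); the layer-naming convention is an assumption.

```python
import math

def lr_at_step(step, total_steps=50_000, warmup=500, peak_lr=2e-4, min_lr=1e-6):
    """Linear warm-up followed by cosine decay from peak_lr to min_lr."""
    if step < warmup:
        return peak_lr * step / max(warmup, 1)
    progress = (step - warmup) / max(total_steps - warmup, 1)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))

def vit_layer_scales(num_layers, decay=0.95):
    """Layer-wise LR decay: deeper layers keep the base LR, earlier layers are scaled down."""
    return {f"blocks.{i}": decay ** (num_layers - 1 - i) for i in range(num_layers)}

print(lr_at_step(0), lr_at_step(500), lr_at_step(50_000))   # 0.0, peak, min
print(vit_layer_scales(4))
```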
2308.12966#46
2308.12966#48
2308.12966
[ "2211.01335" ]
2308.12966#48
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
The training process uses a batch size of 30720 for the image-text pairs, and the entire first stage of pre-training lasts 50,000 steps, consuming approximately 1.5 billion image-text samples and 500 billion image-text tokens. In the second multi-task training stage, we increase the input resolution of the visual encoder from 224 × 224 to 448 × 448, reducing the information loss caused by image down-sampling. We unlock the large language model and train the whole model. The training objective is the same as in the pre-training stage. We use the AdamW optimizer with β1 = 0.9, β2 = 0.98, eps = 1e-6. We train for 19,000 steps with 400 warm-up steps and a cosine learning rate schedule. Specifically, we use model parallelism techniques for the ViT and the LLM.

# D Summary of the evaluation benchmarks

We provide a detailed summary of the evaluation benchmarks used and the corresponding metrics in Table 9.
2308.12966#47
2308.12966#49
2308.12966
[ "2211.01335" ]
2308.12966#49
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
20 # Table 9: Summary of the evaluation benchmarks. Task Dataset Description Split Metric Image Caption Nocaps Flickr30K Captioning of natural images Captioning of natural images val karpathy-test CIDEr(â ) CIDEr(â ) General VQA VQAv2 OKVQA GQA ScienceQA-Img Multi-choice VQA on a diverse set of science topics VizWiz VQA on natural images VQA on natural images requiring outside knowledge val VQA on scene understanding and reasoning VQA on photos taken by people who are blind test-dev test-balanced test test-dev VQA Score(â ) VQA Score(â ) EM(â ) Accuracy(â ) VQA Score(â ) Text-oriented VQA TextVQA DocVQA ChartQA OCRVQA AI2Diagram VQA on natural images containing text VQA on images of scanned documents VQA on images of charts VQA on images of book covers VQA on images of scientific diagrams val test test test test VQA Score(â ) ANLS(â ) Relaxed EM(â ) EM(â ) EM(â ) Refer Expression Comprehension RefCOCO RefCOCO+ RefCOCOg GRiT Refer grounding on natural images Refer grounding on natural images Refer grounding on natural images Refer grounding on natural images val & testA & testB val & testA & testB val & test test Accuracy(â ) Accuracy(â ) Accuracy(â ) Accuracy(â ) Instruction Following TouchStone MME Seed-Bench Open-ended VL instruction following benchmark Open-ended VL Benchmark by yes/no questions Open-ended VL Benchmark by Multi-choice VQA English & Chinese Perception & Cognition Accuracy (â ) Accuracy (â ) Image & Video GPT-4 Score (â ) # E Additional experimental details # E.1 Convergence of the Pre-training Stage In Figure 6, we show the convergence of the Pre-training Stage (stage one). The whole models are trained using BFloat16 mixed precision, the batch size is 30720, and the learning rate is 2eâ
2308.12966#48
2308.12966#50
2308.12966
[ "2211.01335" ]
2308.12966#50
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
4. All images are trained on only once (one epoch). The training loss decreases steadily as the number of training images increases. Note that no VQA data is added in the pre-training stage (stage one), yet the zero-shot VQA score increases amidst fluctuations.

# Figure 6: Visualization of the Convergence of the Pre-training Stage. Panels: (a) pre-training loss; (b) caption score (Flickr); (c) zero-shot VQA score (VQAv2).

# E.2 Number of Learnable Queries in the Vision-Language Adapter

The vision-language adapter uses cross-attention to compress the visual feature sequence with a set of learnable queries of fixed length. Too few queries can lead to the loss of visual information, while too many queries may result in greater convergence difficulty and computational cost. An ablation experiment is conducted on the number of learnable queries in the vision-language adapter.
2308.12966#49
2308.12966#51
2308.12966
[ "2211.01335" ]
2308.12966#51
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
We 21 n 2.70 pe L64 2.65 2.60 8 os Loss 2.50 2.45 3 2.40 10 20 30 40 50 1000 1500 2000 2500 3000 3500 4000 4500 Steps Steps Figure 7: Visualization of the training loss when using different compressed feature lengths of the vision- language adapter. The left depicts the initial training loss (within 50 steps), and the right depicts the loss in convergence (1k-5k steps). In the legend, L64 denotes that the adapter uses 64 queries to compress the visual feature sequence to a fixed length of 64, and so on. The loss curves have been smoothed to avoid shading owing to fluctuations. used ViT-L/14 as the visual encoder and the 224 Ã 224 resolution picture as input, so the sequence length of ViTâ s output is (224/14)2 = 256. As shown in the left part of Figure 7, the fewer queries used at the beginning of training, the lower the initial loss. However, with convergence, too many or too few queries will cause convergence to slow down, as shown in the right part of Figure 7. Considering that the second training stage (Multi-task Pre-train) applies 448*448 resolution, where the sequence length of ViTâ s output is (448/14)2 = 1024. Too few queries can result in more information being lost. We finally chose to use 256 queries for the vision-language adapter in Qwen-VL. # E.3 Window Attention vs Global Attention for Vision Transformer Using a high-resolution Vision Transformer in the model will significantly increase the computational cost. One possible solution to reduce the computational cost of the model is to use Window Attention in the Vision Transformer, i.e., to perform Attention only in a window of 224 Ã 224 in most layers of the ViT part of the model, and to perform Attention for the full 448 Ã 448 or 896 Ã 896 image in a small number of layers (e.g. 1 out of every 4 layers) of the ViT part of the model. To this end, we conducted ablation experiments to compare the performance of the model when using Global Attention and Window Attention for ViT.
2308.12966#50
2308.12966#52
2308.12966
[ "2211.01335" ]
2308.12966#52
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
We compare the experimental results to analyze the trade-off between computational efficiency and model convergence.

# Table 10: Training speed of Window Attention vs Global Attention for different input image resolutions

| Model input resolution & attention type | Training speed |
| --- | --- |
| 448 × 448, Global Attention | 10s / iter |
| 448 × 448, Window Attention | 9s / iter |
| 896 × 896, Global Attention | 60s / iter |
| 896 × 896, Window Attention | 25s / iter |

As shown in Figure 8 and Table 10, the loss of the model is significantly higher when Window Attention is used instead of vanilla (global) attention, and at the 448 × 448 resolution the training speeds of the two are similar. Therefore, we decided to use vanilla attention rather than Window Attention for the Vision Transformer when training Qwen-VL.
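A back-of-envelope view of why window attention saves compute mainly at higher resolutions is sketched below, counting attended token pairs for global attention over the whole grid versus 224 × 224 windows. This is a rough complexity estimate, not a measurement of the actual training speed.

```python
def attention_token_pairs(resolution, patch=14, window=None):
    """Number of query-key pairs for one self-attention layer over a square token grid."""
    side = resolution // patch                    # tokens per image side
    n = side * side
    if window is None:                            # global attention: O(N^2)
        return n * n
    w_side = window // patch                      # tokens per window side
    w_tokens = w_side * w_side
    num_windows = (side // w_side) ** 2
    return num_windows * w_tokens * w_tokens      # sum of per-window O(M^2)

for res in (448, 896):
    g = attention_token_pairs(res)
    w = attention_token_pairs(res, window=224)
    print(f"{res}x{res}: global / window attention pair ratio = {g / w:.1f}")
    # prints 4.0 at 448x448 and 16.0 at 896x896
```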
2308.12966#51
2308.12966#53
2308.12966
[ "2211.01335" ]
2308.12966#53
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
Figure 8: Visualization of the loss when using Window Attention vs Global Attention (curves for 448 × 448 and 896 × 896 resolution, each with Window and Global Attention).

The reason we do not use Window Attention at 896 × 896 resolution is that its training speed is too slow for us: although it reaches a loss value similar to that of the 448 × 448 model at 5000 steps, it takes almost 2.5 times longer to train than the model with 448 × 448 input.

# E.4 Performance on Pure-text Tasks

To study the effect of multi-modal training on pure-text ability, we show the performance of Qwen-VL on pure-text tasks compared to open-source LLMs in Table 11. Qwen-VL uses an intermediate checkpoint of Qwen-7B as the LLM initialization. The reason we did not use the final released checkpoint of Qwen-7B is that Qwen-VL and Qwen-7B were developed during a very similar period. Because Qwen-VL has a good LLM initialization from Qwen-7B, it is comparable to many text-only LLMs on pure-text tasks.
2308.12966#52
2308.12966#54
2308.12966
[ "2211.01335" ]
2308.12966#54
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
Table 11: Performance on pure-text benchmarks of Qwen-VL compared to open-source LLMs. Due to the introduction of pure-text data in the multi-task training and SFT stages, Qwen-VL does not compromise any pure-text ability.

| Model | MMLU | CMMLU | C-Eval |
| --- | --- | --- | --- |
| LLaMA-7B | 35.1 | 26.8 | - |
| LLaMA2-7B | 46.8 | 31.8 | 32.5 |
| Baichuan-7B | 42.3 | 44.4 | 42.8 |
| Baichuan2-7B | 54.2 | 57.1 | 54.0 |
| ChatGLM2-6B | 47.9 | 48.8 | 51.7 |
| InternLM-7B | 51.0 | 51.8 | 52.8 |
| Qwen-7B (final released) | 58.2 | 62.2 | 63.5 |
| Qwen-7B (intermediate, used as Qwen-VL's LLM initialization) | 49.9 | - | 48.5 |
| Qwen-VL | 50.7 | 49.5 | 51.1 |

Furthermore, in the multi-task training and SFT stages, Qwen-VL not only utilizes visual and language-related data but also incorporates pure-text data for training.
2308.12966#53
2308.12966#55
2308.12966
[ "2211.01335" ]
2308.12966#55
Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
The purpose of this is to prevent catastrophic forgetting of text comprehension by leveraging the information in the pure-text data. The results in Table 11 indicate that the Qwen-VL model does not exhibit any degradation in pure-text capability, and even demonstrates improvement after multi-task training.
2308.12966#54
2308.12966
[ "2211.01335" ]
2308.12682#0
SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge
# SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge

Rishi Hazra1, Pedro Zuidberg Dos Martires1, Luc De Raedt1,2 (2KU Leuven)
{rishi.hazra, pedro.zuidberg-dos-martires, luc.de-raedt}@oru.se
https://rishihazra.github.io/SayCanPay/

# Abstract

Large Language Models (LLMs) have demonstrated impressive planning abilities due to their vast "world knowledge".
2308.12682#1
2308.12682
[ "2302.13971" ]
2308.12682#1
SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge
Yet, obtaining plans that are both feasible (grounded in affordances) and cost-effective (in plan length), remains a challenge, despite recent progress. This contrasts with heuristic planning methods that employ domain knowledge (formalized in action models such as PDDL) and heuristic search to generate feasible, optimal plans. Inspired by this, we propose to combine the power of LLMs and heuristic planning by leveraging the world knowl- edge of LLMs and the principles of heuristic search. Our approach, SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain knowledge, that evaluates actionsâ feasibility (Can) and long-term reward/payoff (Pay), and heuristic search to select the best sequence of actions. Our contributions are (1) a novel framing of the LLM planning problem in the context of heuristic planning, (2) integrating grounding and cost-effective elements into the generated plans, and (3) using heuristic search over actions. Our extensive evaluations show that our model surpasses other LLM planning approaches. # Introduction With the rise of Large Language Models (LLMs), there has been a growing interest in leveraging their generative capabilities for planning tasks (Huang et al. 2022a; Valmeekam et al. 2022; Silver et al. 2022; Liu et al. 2023). These models have the ability to generate long-horizon plans, capitalizing on their extensive â
2308.12682#0
2308.12682#2
2308.12682
[ "2302.13971" ]
2308.12682#2
SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge
world knowledgeâ gained from training on vast amounts of data (e.g. eggs are typically stored in the refrigerator, and placing an apple in the fridge will cool it). Such expansive knowledge can be exploited to plan in an open-world context (Ding et al. 2023). Moreover, planning in the natural language space offers significant flexibility especially, with the advent of multimodal foundation models (Lakhotia et al. 2021; Du et al. 2022; Brohan et al. 2023). Such models have made it easier to represent various modalities such as vision, speech, and even actions in the form of natural language, thus bypassing the need to have domain-specific knowledge (e.g. PDDL) that traditional planning approaches require. However, LLM-based planning often faces challenges, particularly in generating feasible plans. It can fail to model action affordances (or pre-conditions)1 due to difficulty in modeling the state of the world (e.g. grab milk from the fridge even if the door is closed) or having a pretrained world model that is not aligned with the current environment (e.g. using a controller to regulate the heater where only a knob exists), leading to infeasible plans. Moreover, such models focus greedily on the next actionable step without considering its relevance to the ultimate goal, resulting in longer, cost-inefficient plans (Valmeekam et al. 2023). Recent works like SayCan (Ahn et al. 2022) have sought to address the affordance problem by using pretrained skills to evaluate the actionâ s executability â Can the action be executed in the current state?
2308.12682#1
2308.12682#3
2308.12682
[ "2302.13971" ]
2308.12682#3
SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge
However, the plan cost remains a concern. In contrast, traditional planning provides an established approach to developing a sequence of actions to transition from an initial state to a goal state. It uses a domain file (with action models defined in PDDL specifying pre- and post- conditions) and heuristic search planners like Fast Downward (Helmert 2006) to ensure feasibility through grounding in preconditions, and generating cost-effective plans by employing search trees to select the best (or shortest) sequence of actions. However, obtaining a domain file for complex real-world environments is difficult, and its use restricts planning to a closed-world setting. These methods also struggle to handle partial observations, although approximate planning (Kaelbling, Littman, and Cassandra 1998) can alleviate it. Integrating LLMs with classical planning offers a promising research path, merging the generative abilities and (open) world knowledge of LLMs with the methodological rigor of planning algorithms. To this end, we extend the following contributions. (1) We propose to frame language model planning in the context of heuristic planning, which to 1In robotics, affordances refer to possible actions that can be executed, which is conceptually similar to inferring preconditions in planning â what actions are feasible in a certain situation.
2308.12682#2
2308.12682#4
2308.12682
[ "2302.13971" ]
2308.12682#4
SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge
[Figure 1 illustration: the goal is "pick up the box". The initial state: Room 1 contains the agent, a red key, and a green ball; Room 2 contains a purple box; the door connecting the rooms is locked and the green ball blocks it. Candidate next actions (e.g., pick up green ball, pick up red key, toggle red door, drop ball in void) are scored step by step. Say-only scoring can select infeasible actions, SayCan yields feasible but sub-optimal actions, and SayCanPay yields the feasible, cost-effective plan: pick up green ball, drop ball in void, pick up red key, toggle red door, drop key in void, pick up purple box, done task.]
2308.12682#3
2308.12682#5
2308.12682
[ "2302.13971" ]
2308.12682#5
SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge
Figure 1: Figure illustrates how SayCanPay scores the next action in BabyAI environment (Chevalier-Boisvert et al. 2019). Given inputs: goal g and initial observation o0, the Say model generates candidate actions with associated probabilities. These are then scored for feasibility by the Can model and for payoff by the Pay model. Here, the Can model deems both pick up red key and pick up green ball equally probable (i.e. both preconditions are satisfied). However, the Pay model ensures a better payoff for pick up green ball. We compare plans generated by Say, SayCan, and SayCanPay scoring. Say scoring can lead to infeasible plans and SayCan to feasible but longer plans.
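The scoring idea illustrated in Figure 1 can be sketched as follows: candidate next actions proposed by the Say model are re-weighted by a feasibility (Can) score and an estimated payoff (Pay) score, and the best-scoring action is selected. Combining the three scores by a product is an illustrative choice here, not necessarily the paper's exact aggregation; the toy values only mirror the qualitative example in the figure.

```python
from typing import Callable, Dict

def select_action(
    candidates: Dict[str, float],                 # action -> Say probability
    can_score: Callable[[str], float],            # feasibility in the current state
    pay_score: Callable[[str], float],            # estimated long-term payoff
) -> str:
    """Pick the candidate action with the highest combined Say*Can*Pay score."""
    combined = {a: p_say * can_score(a) * pay_score(a)
                for a, p_say in candidates.items()}
    return max(combined, key=combined.get)

# toy example loosely mirroring Figure 1
say = {"pick up red key": 0.35, "pick up green ball": 0.30, "toggle red door": 0.20}
can = {"pick up red key": 0.9, "pick up green ball": 0.9, "toggle red door": 0.0}
pay = {"pick up red key": 0.3, "pick up green ball": 0.8, "toggle red door": 0.1}
best = select_action(say, can.get, pay.get)       # -> "pick up green ball"
print(best)
```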
2308.12682#4
2308.12682#6
2308.12682
[ "2302.13971" ]