doi: string (length 10 to 10)
chunk-id: int64 (0 to 936)
chunk: string (length 401 to 2.02k)
id: string (length 12 to 14)
title: string (length 8 to 162)
summary: string (length 228 to 1.92k)
source: string (length 31 to 31)
authors: string (length 7 to 6.97k)
categories: string (length 5 to 107)
comment: string (length 4 to 398)
journal_ref: string (length 8 to 194)
primary_category: string (length 5 to 17)
published: string (length 8 to 8)
updated: string (length 8 to 8)
references: list
2308.02151
12
Spurious Actions LLMs are not pre-trained or designed with an action-agent application in mind. Even if some restrictions are explicitly specified in the prompt, the LLM may still generate spurious actions that are not in the action space A. Limited Prompt Length The LLM itself is stateless. However, in applications it is preferred to empower agents with states or memories for better performance. It has been observed that LLM-based agents easily run into infinite loops if the states are not handled carefully. Many LLM agents concatenate all the previous state descriptions and actions into the prompt as a way to bestow "state" on the LLM. Inevitably this methodology runs into prompt length issues: as the trajectory grows longer, the prompt runs out of space. Heuristic Prompt Engineering Even though many paradigms have been proposed to improve LLM agents' performance (Yao et al., 2023; Ahn et al., 2022), there is a lack of systematic methodologies for consistent model refinement. In fact, manual prompt tuning is still widely used in many application scenarios.
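A minimal sketch of the "concatenate the whole trajectory into the prompt" pattern described above, illustrating how a growing trajectory eventually exceeds the context window. All names here (MAX_PROMPT_TOKENS, count_tokens, build_prompt) are illustrative assumptions, not code from the paper.

```python
# Sketch of the prompt-concatenation pattern used to give a stateless LLM a notion of "state".
# MAX_PROMPT_TOKENS, count_tokens, and build_prompt are illustrative assumptions.
MAX_PROMPT_TOKENS = 4096

def count_tokens(text: str) -> int:
    # Crude whitespace proxy for a real tokenizer.
    return len(text.split())

def build_prompt(user_task: str, trajectory: list[tuple[str, str]]) -> str:
    """Concatenate every past (state, action) pair into the prompt."""
    history = "\n".join(f"State: {s}\nAction: {a}" for s, a in trajectory)
    prompt = f"{user_task}\n{history}\nNext action:"
    if count_tokens(prompt) > MAX_PROMPT_TOKENS:
        # The failure mode noted in the text: the trajectory outgrows the context window,
        # so older steps must be dropped or summarized heuristically.
        raise ValueError("prompt exceeds context window; trajectory too long")
    return prompt
```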
2308.02151#12
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
12
[Figure 2: MM-Vet proportion of capabilities. (a) The proportion of each capability. The sum of the proportion is larger than 100% because most samples have more than one capability. (b) The proportion of capability integrations. The sum of the proportion is equivalent to 100%.] # 3 MM-Vet # 3.1 Data collection
2308.02490#12
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
13
Prohibitive Training Most of the well-performing LLMs are too large to fit on just one or two GPUs. It is technically challenging to optimize the LLMs directly as is done in the classical reinforcement learning setting. In particular, OpenAI has not provided any solution for RL-based fine-tuning. Most of these issues are caused by the fact that LLMs are not pre-trained or designed with an action-agent application in mind. # 4 INTUITION Compared to LLM-based action agents, classical RL agents, though not able to handle text-based environments as nicely in the zero-shot setting, are able to keep improving based on the feedback and rewards provided by the environment. Popular RL algorithms include Policy Gradient (Sutton et al., 2000), Proximal Policy Optimization (PPO) (Schulman et al., 2017), Trust Region Policy Optimization (TRPO) (Schulman et al., 2015), and Advantage Actor-Critic methods (Mnih et al., 2016).
2308.02151#13
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
13
# 3 MM-Vet # 3.1 Data collection Our aim is to develop a multimodal benchmark that requires comprehensive capabilities, corresponding to realistic scenarios an AI agent might encounter. Consider, for instance, this scenario: Awakening from slumber, you reach out for your smartphone (recognition capability) to check the current time (OCR capability). Today, your plan is to visit a new grocery that you have not been to. Guided by the information that the grocery is situated directly opposite the stadium and next to the cinema (spatial awareness), you manage to locate it successfully. Keeping in mind your doctor's advice to shed some weight, you consciously steer clear of high-calorie food and choose milk, vegetables, and fruits instead (knowledge capability). In the dairy aisle, you're faced with a choice between two types of pure milk. The first is 4 dollars for one liter with a 20% discount, while the second is 7 dollars for 1.5 liters with a 25% discount. After some quick arithmetic, you find the former is cheaper (math capability) and opt for the one-liter package. After shopping, you walk past the cinema and find a person pointing to the poster to introduce a new movie (language generation). From the scenarios of interest, we summarize the following six core VL capabilities for evaluation, with corresponding MM-Vet examples shown in Tables 10-15.
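For concreteness, the "quick arithmetic" in the milk scenario works out as follows (per-liter comparison inferred from the stated prices and discounts):

$$
\frac{\$4 \times (1 - 0.20)}{1\ \text{L}} = \$3.20/\text{L},
\qquad
\frac{\$7 \times (1 - 0.25)}{1.5\ \text{L}} = \$3.50/\text{L},
$$

so the one-liter package is indeed the cheaper option per liter.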
2308.02490#13
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
14
In this draft we propose a simple but powerful novel framework to tackle the challenges mentioned above. On one hand, we would like to leverage classical RL-based optimization algorithms such as policy gradient to improve model performance. On the other hand, our framework avoids fine-tuning the LLM directly. The key is that, instead of training the LLM directly, we train a retrospective LM. The retrospective LM takes the users' prompt, rewards, and feedback from the environment as input. Its output is the prompt to be consumed by the actual LLM. RL algorithms are employed to optimize the weights of the retrospective LM instead of the LLM itself. In our framework the weights of the actual LLM are assumed to be fixed (untrainable), which aligns well with application scenarios where the LLM is either too large to tune or prohibited from any tuning.
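A minimal sketch of the division of labor described above: a frozen actor LLM consumes prompts produced by a small, trainable retrospective LM. The class and function names (call_frozen_llm, RetrospectiveLM, run_trial) are illustrative assumptions, not the paper's released code.

```python
# Sketch of the Retroformer-style split: a frozen actor LLM consumes prompts written by
# a small, trainable retrospective LM. Helper names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Trajectory:
    states: list = field(default_factory=list)
    actions: list = field(default_factory=list)
    rewards: list = field(default_factory=list)

def call_frozen_llm(prompt: str) -> str:
    """Stand-in for an API call to the frozen actor LLM (parameters untrainable)."""
    raise NotImplementedError

class RetrospectiveLM:
    """Small local LM whose weights ARE trained (e.g., by policy gradient)."""
    def reflect(self, user_prompt: str, traj: Trajectory) -> str:
        # Condense past states/actions/rewards into a short reflection text.
        raise NotImplementedError

def run_trial(user_prompt: str, retro: RetrospectiveLM, traj: Trajectory) -> str:
    reflection = retro.reflect(user_prompt, traj)          # trainable component
    actor_prompt = f"{user_prompt}\n\nReflection:\n{reflection}"
    return call_frozen_llm(actor_prompt)                   # frozen component
```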
2308.02151#14
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
14
From the scenarios of interest, we summarize the following six core VL capabilities for evaluation, with corresponding MM-Vet examples shown in Tables 10-15. • Recognition (Rec). Recognition refers to the general visual recognition capability, including recognizing scenes, objects, object attributes (color, material, shape, etc.), counting, and various other high-level visual recognition tasks in computer vision. • Knowledge (Know). The knowledge category covers various knowledge-related capabilities, including social and visual commonsense knowledge, encyclopedic knowledge, and time-sensitive knowledge like news. This capability necessitates that the model not only possesses such knowledge, but also effectively utilizes it to solve complicated tasks as required. • OCR. Optical character recognition (OCR) refers to the scene text understanding and reasoning capability. The models are tested to read the scene text in images and reason over the texts to solve various tasks. • Spatial awareness (Spat). Spatial awareness embodies a diverse spectrum of capabilities related to understanding space, including the comprehension of the spatial relationships among objects and scene text regions. • Language generation (Gen). Language generation is a vital ability that empowers models to articulate their responses in a clear, engaging, and informative manner. We use questions that demand more extended answers to evaluate language generation capacity. • Math. Math evaluates the model's arithmetic capability in solving either written equations or problems in the wild.
2308.02490#14
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
15
Another perspective on our framework is that we train a retrospective LM to apply automatic prompt tuning for the LLM agents. In this case, RL algorithms such as policy gradient are employed to optimize the prompts. Ideally the retrospective LM can summarize the past "experience", the users' prompt, and the environment's feedback into a condensed text within a length limit, so that it is easier for the LLM to digest. To some extent, in our setting the original LLM can be considered part of the environment since its parameters are all fixed. # 5 NOTATION AND FORMULATION We denote a large language model (LLM) based agent as a function Lξl : M → A, where M is the space of prompts, which may include the actual prompt mu provided by the users, as well as some contextual information c ∈ C. Here C is the space of contexts, a representation of the current state s ∈ S returned by the environment Ω. A is the space of actions. Note that the actions taken by most language-model-based agents are sampled auto-regressively, so L is a random function. The subscript ξl denotes the re-parameterized random variables involved in the sampling process. Another note is that the LLM-based agent itself is stateless. All the states and possible memorization are characterized as text in the prompt.
2308.02151#15
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
15
• Math. Math evaluates the model's arithmetic capability in solving either written equations or problems in the wild. In real-world scenarios, various complicated multimodal tasks require the integration of different core VL capabilities. For instance, explaining visual jokes as shown in Table 10(a) requires recognition, knowledge of humor, and language generation; reading documents and solving math problems as shown in Table 11(a) takes OCR, spatial awareness, and math; and answering exam questions given images as shown in Table 14(b) needs OCR, knowledge, and spatial awareness. To solve these complicated tasks, LMMs are expected to seamlessly integrate different VL capabilities. Therefore, it is crucial to establish a benchmark that evaluates the performance of these integrated abilities within LMMs.
2308.02490#15
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
16
The environment is defined as a tuple (Tξo, R). Tξo : S × A → S is the state transition function, where S is the space of states and A is the action space. Here we assume the states and actions are represented using text. Again we use ξo to represent the randomness involved in the state transition. For each state s ∈ S, a reward function is defined as R : S → ℝ. At each step of the play, the state s is described using natural language and integrated into the context c. In the context, previous states may also be described and embedded to help the LLM make a good guess at the next action to take. As in every reinforcement learning setting, the final goal is to maximize the cumulative reward, or return, $G_{\mathrm{cum}} = \sum_{t} R(s_t)$. In many situations the rewards are sparse, i.e., $R(s_t)$ is zero except for very few states. The retrospective model takes all the previous states $s_{1,\cdots,t}$, actions $a_{1,\cdots,t}$, rewards $r_{1,\cdots,t}$, and the user prompt $m_u$ as input, and condenses that information into a new prompt m to be consumed by the LLM agent: $$\Gamma_{\xi_r,\Theta} : [s_i, a_i, r_i, m_u]_{i=1}^{t} \rightarrow M, \tag{1}$$
2308.02151#16
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
16
To build the benchmark, we have gathered 187 images from various online sources and asked 205 questions, each of which requires one or more capabilities to answer. As shown in Tables 10-15, these questions are varied in type and entail open-ended responses of differing lengths. The ground truths for 155 questions are human-annotated, while the answers for the remaining 50 questions were gathered from the Internet. In addition to the 187 images, ten extra images with high-quality questions are collected from VCR [88], with the questions and answers modified to an open-ended answering format. Another three images are from ChestX-ray14 [78] to obtain corresponding medical expert knowledge. In total, our MM-Vet contains 200 images and 218 questions (samples), all paired with their respective ground truths. For each question, we have also identified the capacities required to answer them and displayed this information statistically in Figure 2. # 3.2 LLM-based evaluator for open-ended model outputs
2308.02490#16
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
17
$$\Gamma_{\xi_r,\Theta} : [s_i, a_i, r_i, m_u]_{i=1}^{t} \rightarrow M, \tag{1}$$ where ξr stands for the randomness involved in the retrospective model, and Θ is the set of learnable parameters in the retrospective model. The goal of the RL optimization is $$\arg\max_{\Theta} \; \mathbb{E}_{\xi_l,\xi_o,\xi_r}\Big[\sum_{t=1}^{T} R(s_t)\Big] \quad \text{s.t.} \quad s_{t+1} = T_{\xi_o}\big(s_t,\, L_{\xi_l}(\Gamma_{\xi_r,\Theta}([s_i, a_i, r_i, m_u]_{i=1}^{t}))\big), \quad t \in \{1, \cdots, T-1\}. \tag{2}$$ Note that the only learnable parameters are in the retrospective model Mr. Since the LLM agent model is fixed, it can be considered as a component of the environment. Specifically, if we construct another environment with the transition function T′ = T(S, •) ◦ L : S × M → S, and the same reward function R, then the objective in Eq. (2) is just a regular RL optimization, so all the popular RL algorithms apply. # 6 OUR APPROACH: REINFORCING RETROSPECTIVE LANGUAGE AGENT
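A schematic sketch of the optimization view implied by Eq. (2): because the actor LLM is frozen, it can be folded into the environment, and only the retrospective model's parameters Θ receive gradient updates. The REINFORCE-style surrogate loss and the names below are illustrative assumptions, not the paper's released code.

```python
# Sketch of the policy-gradient view of Eq. (2): the frozen actor LLM is part of the
# environment; only the retrospective model (the "policy" over reflections) is updated.
def reinforce_loss(log_probs: list[float], returns: list[float], baseline: float = 0.0) -> float:
    """Surrogate objective: -(1/N) * sum_i (G_i - b) * log pi_Theta(reflection_i | context_i).
    Minimizing this (via autodiff in practice) ascends the expected return."""
    n = len(log_probs)
    return -sum(lp * (g - baseline) for lp, g in zip(log_probs, returns)) / n

# Dummy example: two sampled reflections, one trial improved the return (G = 1.0) and
# one did not (G = 0.0); the gradient would push up the log-probability of the former.
print(reinforce_loss(log_probs=[-2.3, -1.7], returns=[1.0, 0.0], baseline=0.5))
```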
2308.02151#17
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
17
# 3.2 LLM-based evaluator for open-ended model outputs Questions and expected responses in MM-Vet are designed to be open-ended to cover the diverse real-world scenarios. This naturally poses a great challenge in terms of model evaluation and metric design. Drawing inspiration from recent NLP studies [19, 92] that utilize LLMs for open-ended evaluations, we leverage GPT-4 to assist evaluation. As shown in Table 1, we craft a few-shot prompt for model evaluation. The few-shot design allows us to define the scoring metrics via in-context examples and supports easy extension onto new problem sets. Specifically, our implemented prompt incorporates five in-context examples with open-ended short answers and two examples with long answers. We cover examples that are fully correct (i.e., 1.0) or incorrect (i.e., 0.0), as well as examples used to define different types of “partially correct” responses. The LLM-based evaluator allows any style of model outputs to be evaluated with a unified consistent metric. Furthermore, it also supports easy adaptation to diverse question types and answer styles by simply modifying the evaluation examples. By inputting the prompt, GPT-4 automatically generates scores for each sample, conditioned on each sample’s input question, ground truth, and model output. The score for each sample ranges from 0 to 1. The total scores are computed by
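A minimal sketch of the scoring flow described above: fill the few-shot grading prompt with a sample's question, ground truth, and model output, query GPT-4, and parse the returned score. The FEWSHOT_PROMPT placeholder and the query_gpt4 helper are assumptions for illustration; the actual prompt text appears in Table 1 of the paper.

```python
# Sketch of the GPT-4-based grading loop. FEWSHOT_PROMPT and query_gpt4 are placeholders;
# the real few-shot prompt is given in Table 1, and the call would go through the GPT-4 API.
FEWSHOT_PROMPT = (
    "Compare the ground truth and prediction from AI models, to give a correctness score "
    "for the prediction. ...\n"  # truncated; see Table 1 for the full in-context examples
    "Question | Ground truth | Prediction | Correctness\n"
)

def query_gpt4(prompt: str) -> str:
    """Stand-in for a GPT-4 API call; returns the model's text completion."""
    raise NotImplementedError

def grade_sample(question: str, ground_truth: str, model_output: str) -> float:
    prompt = FEWSHOT_PROMPT + f"{question} | {ground_truth} | {model_output} | "
    reply = query_gpt4(prompt).strip()
    score = float(reply.split()[0])          # expect a number such as "0.4"
    return min(max(score, 0.0), 1.0)         # clamp to the [0, 1] grading range
```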
2308.02490#17
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
18
# 6 OUR APPROACH: REINFORCING RETROSPECTIVE LANGUAGE AGENT As illustrated in Fig. 2, our proposed framework Retroformer is comprised of two language model components: an actor LLM, denoted as Ma, which generates reasoning thoughts and actions, and a retrospective LLM, denoted as Mr, which generates verbal reinforcement cues to assist the actor in self-improvement by refining the actor prompt with reflection responses. [Figure 2: Framework overview. (a) Retrospective agent. (b) Ratings for reflection responses, where reflection response k is rated by the return difference ΔG_{k,i} = G_{k,i+1} - G_{k,i}.]
2308.02151#18
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
18
$$S = \frac{\sum_{i=1}^{N} s_i}{N} \times 100\%, \tag{1}$$ where si is the score of sample i, and N is the sample number. The score regarding each capability or capability integration can be similarly obtained by $$S_c = \frac{\sum_{i \in C} s_i}{N_c} \times 100\%, \tag{2}$$ where C is the set of samples requiring a specific capability or capability integration, and Nc is the sample number of the set. Table 1: Few-shot prompt for evaluating model outputs using GPT-4, where Q is a sample's question, G is the ground truth and P is the model output for the sample. In the prompt, there are examples with short and long open-ended answers, enabling the evaluation of diverse answer styles. Taking the prompt filled with Q, G and P, GPT-4 will generate a soft grading score from 0 to 1.
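A small sketch of Eq. (1) and Eq. (2) in code, aggregating per-sample GPT-4 grades into an overall score and per-capability scores. The sample layout below is an illustrative assumption, not the actual MM-Vet data format.

```python
# Sketch of Eq. (1) and Eq. (2): turn per-sample grades (each in [0, 1]) into an overall
# percentage score and per-capability percentage scores. Data layout is illustrative.
samples = [
    {"score": 1.0, "capabilities": {"rec", "know"}},
    {"score": 0.5, "capabilities": {"ocr", "math"}},
    {"score": 0.0, "capabilities": {"rec", "ocr", "spat"}},
]

def overall_score(samples) -> float:
    # Eq. (1): S = (sum_i s_i / N) * 100%
    return sum(s["score"] for s in samples) / len(samples) * 100

def capability_score(samples, capability: str) -> float:
    # Eq. (2): S_c = (sum_{i in C} s_i / N_c) * 100%, over samples requiring `capability`
    subset = [s for s in samples if capability in s["capabilities"]]
    return sum(s["score"] for s in subset) / len(subset) * 100

print(overall_score(samples))            # 50.0
print(capability_score(samples, "rec"))  # 50.0
```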
2308.02490#18
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
19
Figure 2: Framework overview. We assume in this paper that the actor model is a frozen LLM whose model parameters are inaccessible (e.g., GPT, Bard) and the retrospective model is a smaller, local language model that can be fine-tuned under low-resource settings. In addition, Retroformer has an iterative policy gradient optimization step which is specifically designed to reinforce the reflection model with a gradient-based approach. We provide in this section a detailed description of each of these modules and subsequently elucidate their collaborative functioning within the Retroformer framework. The implementation details are presented in Section 7.2. # 6.1 RETROSPECTIVE AGENT ARCHITECTURE As illustrated in Fig. 2(a), for the base actor model and retrospective model, we apply a standard communication protocol modified from the Reflexion agent architecture (Shinn et al., 2023), in which the retrospective model refines the actor prompt by appending verbal feedback to the prompt.
2308.02151#19
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
19
Compare the ground truth and prediction from AI models, to give a correctness score for the prediction. <AND> in the ground truth means it is totally right only when all elements in the ground truth are present in the prediction, and <OR> means it is totally right when any one element in the ground truth is present in the prediction. The correctness score is 0.0 (totally wrong), 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, or 1.0 (totally right). Just complete the last space of the correctness score.
2308.02490#19
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
20
Actor Model The actor model is an LLM hosted in the cloud, whose model parameters are hidden and frozen all the time. The actor LM is instructed to generate actions with the required textual content, taking into account the observed states. Similar to reinforcement learning, we select an action or generation, denoted as at, from the current policy πθ at time step t and receive an observation, represented by st, from the environment. We use ReAct (Yao et al., 2023) as our actor prompt. a_{k,i,t} = Ma([s_{k,i,1}, a_{k,i,1}, r_{k,i,1}, ..., r_{k,i,t-1}, s_{k,i,t}]). (3) Retrospective Model The retrospective model Mr is instantiated as a local LM. Its primary function is to produce self-reflections, offering valuable feedback for diagnosing a possible reason for prior failure and devising a new, concise, high-level plan that aims to mitigate the same failure. Operating under a sparse reward signal, such as binary success status (success/failure), the model detects the root cause of failure by considering the current trajectory alongside its persistent memory. y_{k,i} = Mr([s_{k,i,τ}, a_{k,i,τ}, r_{k,i,τ}]_{τ=1..T}, G_{k,i}), (4) where the bracketed trajectory together with the episode return G_{k,i} forms the reflection prompt x_{k,i}.
2308.02151#20
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
20
Question | Ground truth | Prediction | Correctness — | — | — | — What is x in the equation? | -1 <AND> -5 | x = 3 | 0.0 What is x in the equation? | -1 <AND> -5 | x = -1 | 0.5 What is x in the equation? | -1 <AND> -5 | x = -5 | 0.5 What is x in the equation? | -1 <AND> -5 | x = -5 or 5 | 0.5 What is x in the equation? | -1 <AND> -5 | x = -1 or x = -5 | 1.0 Can you explain this meme? | This meme is poking fun at the fact that the names of the countries Iceland and Greenland are misleading. Despite its name, Iceland is known for its beautiful green landscapes, while Greenland is mostly covered in ice and snow. The meme is saying that the person has trust issues because the names of these countries do not accurately represent their landscapes. | The meme talks about Iceland and Greenland. It’s pointing out that despite their names, Iceland is not very icy and Greenland isn’t very green. | 0.4 Can you explain this meme? | This
2308.02490#20
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
21
y_{k,i} = Mr([s_{k,i,τ}, a_{k,i,τ}, r_{k,i,τ}]_{τ=1..T}, G_{k,i}), (4) where the bracketed trajectory and the episode return form the reflection prompt x_{k,i}. This self-reflection feedback yk,i is appended to the actor prompt to prevent repetitive errors in a specific environment in future attempts. Consider a multi-step task, wherein the agent failed in the prior trial. In such a scenario, the retrospective model can detect that a particular action, denoted as at, led to subsequent erroneous actions and final failure. In future trials, the actor LM can use these self-reflections, which are appended to the prompt, to adapt its reasoning and action steps at time t, opting for the alternative action a′t. This iterative process empowers the agent to exploit past experiences within a specific environment and task, thereby avoiding repetitive errors. Memory Module The actor model generates thoughts and actions by conditioning on its recent interactions (short-term memory) and reflection responses (long-term memory). • Short-term memory. The trajectory history τi of the current episode i serves as the short-term memory for decision making and reasoning. • Long-term memory. The reflection responses that summarize prior failed attempts are appended to the actor prompt as the long-term memory.
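A minimal sketch of such a memory module, assuming a simple string-concatenation prompt format; the class and field names are illustrative rather than the paper's actual implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AgentMemory:
    """Illustrative memory module: short-term trajectory plus long-term reflections."""
    trajectory: List[str] = field(default_factory=list)   # short-term: steps of the current episode
    reflections: List[str] = field(default_factory=list)  # long-term: summaries of prior failed trials

    def build_actor_prompt(self, task: str) -> str:
        """Concatenate the task, long-term reflections, and the current trajectory."""
        reflections = "\n".join(f"Reflection: {r}" for r in self.reflections)
        history = "\n".join(self.trajectory)
        return f"{task}\n{reflections}\n{history}\nNext action:"
```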
2308.02151#21
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
21
pointing out that despite their names, Iceland is not very icy and Greenland isn’t very green. | 0.4 Can you explain this meme? | This meme is poking fun at the fact that the names of the countries Iceland and Greenland are misleading. Despite its name, Iceland is known for its beautiful green landscapes, while Greenland is mostly covered in ice and snow. The meme is saying that the person has trust issues because the names of these countries do not accurately represent their landscapes. | The meme is using humor to point out the misleading nature of Iceland’s and Greenland’s names. Iceland, despite its name, has lush green landscapes while Greenland is mostly covered in ice and snow. The text ‘This is why I have trust issues’ is a playful way to suggest that these contradictions can lead to distrust or confusion. The humor in this meme is derived from the unexpected contrast between the names of the countries and their actual physical characteristics. | 1.0 Q | G | P |
2308.02490#21
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
22
• Long-term memory. The reflection responses that summarize prior failed attempts are appended to the actor prompt as the long-term memory. To facilitate policy optimization in Section 6.2, we store the prompts and responses of the retrospective model of each trial, as instruction-response pairs, together with the episode returns in a local dataset, which we call the replay buffer. We sample from the replay buffer to reinforce the retrospective model at the end of each episode. The long- and short-term memory components provide context that is specific to a given task over several failed trials, and the replay buffer provides demonstrations of successful reflections across tasks and environments, so that our Retroformer agent not only exploits lessons learned over failed trials in this task, but also explores by learning from successful reflections in other tasks. • Replay buffer. The memory DRL which stores the triplets (xk,i, yk,i, Gk,i) of the reflection prompt xk,i, reflection response yk,i and episode return Gk,i of trial i and task k. Reward Shaping We apply reward shaping to the binary rewards to obtain a more informative learning signal. For question answering tasks, instead of exact-matching the answer, we use F1-score grading to evaluate the alignment of the generated output with the expected answer as the reward function. 6.2 POLICY GRADIENT OPTIMIZATION
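A sketch of the F1-based reward shaping for question answering, assuming whitespace tokenization; the exact preprocessing used by the paper is not specified here.

```python
from collections import Counter

def f1_reward(prediction: str, ground_truth: str) -> float:
    """Token-level F1 score used as a shaped reward instead of an exact-match 0/1 signal."""
    pred_tokens = prediction.lower().split()
    gold_tokens = ground_truth.lower().split()
    if not pred_tokens or not gold_tokens:
        return float(pred_tokens == gold_tokens)
    common = Counter(pred_tokens) & Counter(gold_tokens)  # per-token overlap counts
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```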
2308.02151#22
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
22
# 4 Evaluation results # 4.1 Experiment settings We utilize MM-Vet to evaluate two types of LMMs, i.e., (1) end-to-end tuned LMMs (OpenFlamingo [4, 9, 8], BLIP-2 [46], LLaVA [51], MiniGPT-4 [93], Otter [44] and InstructBLIP [23]); (2) LLM-tool-using methods (MM-ReAct [85] and Transformers Agent [38]). The summary of these methods is shown in Table 2. As shown in Table 1, for each sample, we fill the prompt template with its question, ground truth, and output from a specific LMM. Feeding the filled prompt to GPT-4, GPT-4 generates a score from 0 to 1 for the sample. We find that GPT-4 outputs still exhibit variance even though the temperature is set to 0. Therefore, we use GPT-4 to evaluate the outputs of the LMMs 5 times. Due to the space limit, we report average scores for capabilities/capability integrations, and the average as well as the variance for the total score. # 4.2 Result analyses The main results of different methods are shown in Table 3 regarding each capability, and in Table 4 for each capability integration.
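A small sketch of how the repeated evaluations might be aggregated into the reported average and variance. It assumes `per_run_scores` holds one list of per-sample scores (each in 0 to 1) for each of the 5 GPT-4 runs; this is an illustrative aggregation, not the benchmark's exact code.

```python
from statistics import mean, pvariance

def aggregate_runs(per_run_scores: list) -> tuple:
    """Average per-sample scores into a total score per run (in %), then report the
    mean and variance of that total across repeated GPT-4 evaluation runs."""
    totals = [100 * mean(run) for run in per_run_scores]
    return mean(totals), pvariance(totals)
```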
2308.02490#22
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
23
6.2 POLICY GRADIENT OPTIMIZATION The actor model Ma is regarded as a frozen LLM, such as GPT, with inaccessible model parameters. In this scenario, the most direct approach to enhancing actor performance in a given environment is refining the actor LM's prompt. Consequently, the retrospective model Mr, a smaller local language model, paraphrases the actor's prompt by incorporating a concise summary of errors and valuable insights from failed attempts. We therefore aim to optimize the Mr model using the environment reward. The desired behavior of Mr is to improve the actor model Ma in subsequent attempts. Hence, the difference in episode returns between two consecutive trials naturally serves as a reward signal for iteratively fine-tuning the retrospective model Mr, acting as a surrogate for reinforcement learning aided by gradient-based signals. The specifics of this approach are outlined as follows.
2308.02151#23
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
23
# 4.2 Result analyses The main results of different methods are shown in Table 3 regarding each capability, and Table 4 for each capability integration. Table 2: Summary of the evaluated LMMs in this report. We consider both the end-to-end tuned models (i.e., OpenFlamingo [4, 9, 8], BLIP-2 [46], LLaVA [51], MiniGPT-4 [93], LLaMA-Adapter v2 [32], Otter [44], InstructBLIP [23]), and the LLM-tool-using systems (i.e., MM-ReAct [85] and Transformers Agent [38]).
2308.02490#23
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
24
[Figure 3: Policy gradient optimization of retrospective LM using RLHF training pipeline. The figure depicts reflection prompts x and reflection responses y, rated by episode-return differences (e.g., r = 0.92 for a helpful reflection, r = -0.31 for an unhelpful one), feeding a PPO trainer that fine-tunes the retrospective LM.]
2308.02151#24
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
24
Method OpenFlamingo-9B [4, 9, 8] Vision CLIP ViT-L/14 [66] Initial models Language MPT-7B [2] Other – Tuning data Multimodal C4 [94] BLIP-2-12B [46] EVA-ViT-G [26] Flan-T5-XXL [21] – 1. COCO [49]; 2. Visual Genome [41]; 3. CC3M [69]; 4. CC12M [15]; 5. SBU [62]; 6. 115M images from the LAION-400M [68]. (CapFilt [47] is used to create synthetic captions for the web images) LLaVA-7B [51] LLaVA-13B [51] CLIP ViT-L/14 [66] Vicuna-7B [92] Vicuna-13B [92] – 1. CC3M [69] Concept-balanced 595K [51]; 2. LLaVA-Instruct-158K [51]. LLaVA-7B (LLaMA-2) [51] LLaVA-13B (LLaMA-2) [51] LLaVA-13B (V1.3, 336px) [51] CLIP ViT-L/336px [66]
2308.02490#24
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
25
Figure 3: Policy gradient optimization of retrospective LM using RLHF training pipeline. Instruction and Response Generation The retrospective model generates a pair of instruction and response at the end of each trial i in the environment k. In trial i, the actor produces a trajectory τi by interacting with the environment. The evaluator then produces a score ri, computed as ri = Me(τi). After the trial, to produce verbal feedback for refining the actor prompt, Mr takes the set {τi, ri} as the instruction xk,i and is prompted to produce a reflection response yk,i. All these instruction-response pairs (xk,i, yk,i) across tasks and trials are stored in a local dataset DRL, which we call the "replay buffer", for fine-tuning Mr at the end of each episode.
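A minimal replay-buffer sketch for the (xk,i, yk,i, Gk,i) triplets; the class names and the uniform sampling strategy are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass
from typing import List
import random

@dataclass
class ReflectionRecord:
    prompt: str            # x_{k,i}: failed trajectory plus return, formatted as the reflection prompt
    response: str          # y_{k,i}: the generated self-reflection
    episode_return: float  # G_{k,i}

class ReplayBuffer:
    """Illustrative replay buffer holding fine-tuning data for the retrospective model."""
    def __init__(self) -> None:
        self.records: List[ReflectionRecord] = []

    def add(self, prompt: str, response: str, episode_return: float) -> None:
        self.records.append(ReflectionRecord(prompt, response, episode_return))

    def sample(self, batch_size: int) -> List[ReflectionRecord]:
        return random.sample(self.records, min(batch_size, len(self.records)))
```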
2308.02151#25
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
25
(LLaMA-2) [51] LLaVA-13B (V1.3, 336px) [51] CLIP ViT-L/336px [66] Vicuna-13B-v1.3 [92] MiniGPT-4-8B [93] LLaMA-2-7B-Chat [74] LLaMA-2-13B-Chat [74] CLIP ViT-L/14 [66] Vicuna-7B [92] EVA-ViT-G [26] MiniGPT-4-14B [93] Vicuna-13B [92] – BLIP-2’s Q-Former [46] 1. LAION /CC/SBU BLIP-Caption Concept-balanced 558K [51]; 2. LLaVA-Instruct-80K [51]. 1. CC3M [69]; 2. CC12M [15]; 3. SBU [62]; 4. LAION-400M [68] 5. Proposed 3,500 aligned image-text pairs [93]. LLaMA-Adapter v2-7B [32] CLIP ViT-L/14 [66] LLaMA-7B [73] – 1. LAION-400M [68]; 2. COYO-700M [14]; 3.
2308.02490#25
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
26
Reflection Response Rating As illustrated in Fig. 2(b), let us assume a reflection prompt xk,i and the corresponding episode return Gk,i, and that the retrospective model Mr generates the response yk,i that summarizes the mistakes in trial i, which results in the return Gk,i+1 in the next attempt i + 1. Because the actor is a frozen LM and the temperature is set to zero by default (Yao et al., 2023), i.e., T = 0, the injected randomness that leads to differences in returns ∆Gk,i = Gk,i+1 − Gk,i comes from the reflection responses yk,i: positive ∆Gk,i indicates better responses that help the actor learn from prior errors, and hence should be rated with higher scores; negative or zero ∆Gk,i indicates worse responses that need to be avoided and hence should be rated with lower scores. Therefore, we define the rating score of a reflection instruction-response pair (xk,i, yk,i) as: r(xk,i, yk,i) ≜ Gk,i+1 − Gk,i. (5)
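The rating in Eq. (5) is just the difference of consecutive episode returns; a minimal sketch, assuming the returns for a given task are collected in trial order:

```python
def rate_reflections(episode_returns: list) -> list:
    """r(x_{k,i}, y_{k,i}) = G_{k,i+1} - G_{k,i}: positive ratings mark reflections that
    improved the next trial, while non-positive ratings mark reflections to avoid."""
    return [episode_returns[i + 1] - episode_returns[i]
            for i in range(len(episode_returns) - 1)]
```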
2308.02151#26
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
26
[66] LLaMA-7B [73] – 1. LAION-400M [68]; 2. COYO-700M [14]; 3. Multimodal C4 [94]; 4. SBU [62]; 5. CC12M [15]; 6. COCO [49]; 7. GPT-4-LLM [64]; 8. Tuning data of LLaVA [51] Otter-9B [44] CLIP ViT-L/14 [66] MPT-7B [2] OpenFlamingo-9B’s [4, 9, 8] 1. Perceiver Resampler; 2. GATED XATTN-DENSE MIMIC-IT [43] InstructBLIP-8B [23] InstructBLIP-14B [23] EVA-ViT-G [26] Vicuna-7B [92] Vicuna-13B [92] BLIP-2’s Q-Former [46] 1. Tuning data of BLIP-2 [46]; 2. 26 publicly available datasets (transformed into instruction tuning format). Transformers Agent (GPT-4 as agent) [38] – 1. GPT-4 [59]; 2. Flan-T5 [21]; 3. BART [42] 1. Donut
2308.02490#26
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
27
r(xk,i, yk,i) ≜ Gk,i+1 − Gk,i. (5) Proximal Policy Optimization The optimization step of Retroformer is formalized as an iterative process in Fig. 3. We use the differences of episode returns as the ratings of the generated reflection responses. The retrospective language model is fine-tuned at the end of each episode following the standard RLHF training procedure (Ouyang et al., 2022) with proximal policy optimization: L_PPO = E_{x∼D_RL} E_{y∼LLM_ϕ^RL(x)} [ r(x, y) − β log( LLM_ϕ^RL(y|x) / LLM^Ref(y|x) ) ], (6) where (x, y) are sampled from the replay buffer, r(x, y) is the defined reward model, and the second term in this objective is the KL divergence that makes sure the fine-tuned model LLM^RL does not stray too far from the frozen reference model LLM^Ref.
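A minimal sketch of the per-sample term inside Eq. (6): the environment-derived rating minus a KL penalty between the fine-tuned and frozen reference models. In practice the full clipped-surrogate PPO update would be handled by an RLHF library such as TRL; this only shows the reward-shaping piece, and the argument names are assumptions.

```python
def kl_shaped_reward(rating: float, logprob_rl: float, logprob_ref: float, beta: float = 0.1) -> float:
    """Per-sample objective term: r(x, y) - beta * log(LLM_RL(y|x) / LLM_Ref(y|x)).
    The log-ratio equals the difference of sequence log-probabilities under the
    fine-tuned retrospective LM and the frozen reference LM."""
    return rating - beta * (logprob_rl - logprob_ref)
```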
2308.02151#27
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
27
as agent) [38] – 1. GPT-4 [59]; 2. Flan-T5 [21]; 3. BART [42] 1. Donut [39]; 2. BLIP [47]; 3. ViLT [40]; 4. CLIPSeg [57] 5. Whisper [67]; 6. SpeechT5 [7]; 7. NLLB [22] None MM-ReAct-GPT-3.5 [85] MM-ReAct-GPT-4 [85] – GPT-3.5 [63] GPT-4 [59] 1. Azure Cognitive Services APIs [10] for image captioning, image tagging, dense captioning, OCR and specialized recognition on celebrities, receipts, etc 2. Bing search; 3. PAL [31] None Total params 9B 12B 7B 13B 7B 13B 13B 8B 14B 7B 9B 8B 14B Not clear Not clear
2308.02490#27
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
28
The actor model, the retrospective model, and the policy gradient optimization module work together through trials in a loop until the environment deems τt to be correct. In practice, we apply offline RL methods instead of online optimization. We collect the dataset DRL by rolling out a base policy, i.e., the frozen actor LM and the initialized retrospective LM, on the training-set tasks for N trials and computing the ratings. We apply the standard RLHF pipeline to fine-tune the retrospective model offline before evaluating the agent on the validation tasks. In online execution, we use a best-of-n sampler, with the scores evaluated by the learned reward model from the RLHF pipeline, as an alternative method of generating better retrospective responses in each trial. # 7 EXPERIMENTS Extensive experiments are conducted to evaluate our method, including comparisons with ReAct and Reflexion performance, and visualization and discussion of the agent's generated text and actions. 7.1 ENVIRONMENT AND DATASET We use the open-source HotPotQA (Yang et al., 2018) environment, which consists of search-based question answering tasks, to evaluate the agent's tool-usage abilities under a large state-action space. HotPotQA The agent is asked to solve a question answering task by searching Wikipedia pages. At each time step, the agent is asked to choose from three action types or API calls:
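A minimal sketch of the best-of-n sampler described above; `generate` and `reward_model` are assumed callables standing in for the fine-tuned retrospective LM and the learned RLHF reward model.

```python
def best_of_n(prompt: str, generate, reward_model, n: int = 4) -> str:
    """Draw n candidate reflections for the same prompt and keep the one the
    learned reward model scores highest."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda y: reward_model(prompt, y))
```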
2308.02151#28
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
28
# 4.2.1 Regarding each capability
Recognition. The “Recognition” category contains questions that require recognition capability to answer. Examples are shown in Tables 10(a, b), 11(b), 12(a, b), 13(a, b), 14(a, c), and 15(b). The “Rec” column in Table 3 compares performance on “Recognition”. Among the evaluated models, LLaVA-13B (LLaMA-2) is the best one, obtaining 39.2%. There may be two reasons. First, LLaVA-13B (LLaMA-2) adopts ViT-L/14 [24] from CLIP [66] as its vision model, which is trained on a large amount of data (400 million image-text pairs). Second, it is surprising that a stronger language model can largely boost recognition performance: LLaVA-13B (LLaMA-2) obtains an 8.3% improvement over LLaVA-13B (Vicuna-13B). Stronger LLMs may help understand questions better and identify key information from visual inputs.
2308.02490#28
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
29
HotPotQA The agent is asked to solve a question answering task by searching Wikipedia pages. At each time step, the agent is asked to choose from three action types or API calls (a minimal parser for these calls is sketched after this list):
1. SEARCH[ENTITY], which searches for the exact entity on Wikipedia and returns the first paragraph if it exists. If not, it returns some similar entities to search.
2. LOOKUP[KEYWORD], which returns the next sentence containing the keyword in the last passage successfully found by Search.
3. FINISH[ANSWER], which returns the answer and finishes the task.
Dataset We collected 3,383 reflection samples by running the base rollout policy for 3 trials (N = 3) on 3,000 tasks in the training set, of which 1,084 instruction-response pairs have positive ratings.
IMPLEMENTATION DETAILS
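The sketch below shows one way to parse and validate the three API calls listed above. It is a minimal illustration: the regex, function name, and error handling are assumptions for this sketch, not code from the paper.

```python
import re
from typing import Tuple

# Accepts the bracketed action strings the actor LM emits for the three HotPotQA
# API calls above, e.g. "Search[Tara Strong]" or "FINISH[Teen Titans Go!]".
ACTION_PATTERN = re.compile(r"^(Search|Lookup|Finish)\[(.+)\]$", re.IGNORECASE)

def parse_action(action_text: str) -> Tuple[str, str]:
    """Split an actor-generated action string into (action_type, argument)."""
    match = ACTION_PATTERN.match(action_text.strip())
    if match is None:
        # Anything outside the three API calls is rejected as invalid.
        raise ValueError(f"Action not in the action space: {action_text!r}")
    return match.group(1).lower(), match.group(2).strip()
```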
2308.02151#29
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
29
LLaMA-Adapter v2-7B is another strong model in recognition, achieving 38.5%. This strong performance may stem from its varied and large-scale tuning data, including LAION-400M [68], COYO-700M [14], Multimodal C4 [94], and the tuning data of LLaVA [51], as shown in Table 2. Besides, InstructBLIP-8B [23] attains 32.4%. As shown in Table 2, the tuning data of InstructBLIP includes 26 publicly available datasets, which contain recognition-heavy datasets such as VQA v2 [35] and GQA [37]. The promising recognition capability of InstructBLIP may benefit from these datasets.
2308.02490#29
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
30
IMPLEMENTATION DETAILS
Model We use GPT-3 (model: text-davinci-003) as the frozen actor model. For the retrospective model, we instantiate it from LongChat (model: longchat-7b-16k), an LM with 16k context length obtained by fine-tuning llama-7b on instruction-following samples from ShareGPT. In all experiments, we set the temperature of the actor LM to zero, i.e., T=0, and top_p=1 to isolate the randomness of the LM from the effects of reflections. We acknowledge that setting a higher temperature value can encourage exploration, but it can obscure the impact of the proposed approaches, making it difficult to compare against existing baselines with T=0 (Yao et al., 2023; Shinn et al., 2023).
Setup Our proposed learning framework is developed using multiple open-source tools as follows. We use the OpenAI connectors from langchain to build our actor models Ma. During inference of the retrospective model, we host an API server using FastChat and integrate it with langchain agents. The tool can host longchat-7b-16k with concurrent requests to speed up RL policy rollouts. For fine-tuning the retrospective model, we develop our training pipeline with trl, which supports transformer reinforcement learning with a PPO trainer.
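As a rough illustration of the decoding settings above, the sketch below calls the frozen actor with T=0 and top_p=1, assuming the legacy openai-python (pre-1.0) Completion API; the local endpoint URL mentioned in the comment is an assumption, not taken from the paper's code.

```python
import openai

def actor_step(prompt: str, max_tokens: int = 256) -> str:
    """One greedy actor call: temperature 0 and top_p 1 remove sampling randomness."""
    response = openai.Completion.create(
        model="text-davinci-003",  # frozen actor LM
        prompt=prompt,
        temperature=0,
        top_p=1,
        max_tokens=max_tokens,
    )
    return response["choices"][0]["text"]

# The retrospective LM (longchat-7b-16k) is served behind a FastChat server that
# exposes an OpenAI-compatible API; pointing openai.api_base at that server (e.g.
# "http://localhost:8000/v1", an assumed local URL) lets the same call pattern be
# reused with model="longchat-7b-16k".
```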
2308.02151#30
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
30
OCR. OCR assesses models’ capabilities in recognizing scene text in images and performing various types of reasoning on it, including math, spatial, recognition, etc. Examples are shown in Tables 10(c), 11(a, c, d), 12(b), 13(a, b), 14(a, b), and 15(a, b). As shown in the “OCR” column of Table 3, MM-ReAct-GPT-4 [85] performs the best (65.7%) in OCR capability with the assistance of an external OCR model as a tool. Among end-to-end tuned models, LLaVA-13B (LLaMA-2) [51] achieves the highest performance (22.7%). This superior performance may be attributed to LLaVA’s adoption of CLIP [66] ViT-L/14 [24] as its vision model, and to the inclusion of a large volume of image-OCR pairings within the training data [55].
Knowledge. As depicted in Tables 10(a), 12(a, b) and 14(b, c), the “knowledge” category covers a wide range of knowledge-related questions, ranging from joke understanding to encyclopedia
2308.02490#30
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
31
We present the details of the specific prompts we used and the full agent demonstrations and examples for each environment in Appendix A.
Training We fine-tune the retrospective model Mr with 4-bit quantized LoRA adapters (r=1) on the offline RL datasets with epochs=4, batch size=8, and lr=1.4e-5; a minimal configuration sketch is given below. The number of trainable parameters is 0.53M (0.015% of llama-7b). We first run the supervised fine-tuning trainer on the samples with positive ratings for 2 epochs and then the RLHF pipeline, including reward modeling and RL fine-tuning with PPO, on the whole offline rating dataset using the default settings for the llama-7b model.
Evaluation Metrics We report the success rate and the average episode returns over validation tasks in an environment. The agent is evaluated on 100 validation tasks from the distractor dev split of the open-source HotPotQA dataset.
Baselines We experiment with two language agent baselines.
• ReAct (Yao et al., 2023). This is the language agent architecture that AutoGPT (Gravitas, 2023) is built upon. This agent does not learn from environment rewards at all and can serve as a baseline showing how the actor model performs without exploration and exploitation in the environment.
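The following is a minimal sketch of the adapter setup described in the Training paragraph above, using Hugging Face transformers/peft. The base checkpoint id, target modules, and LoRA alpha/dropout are assumptions, since only the rank, learning rate, batch size, and epoch counts are given in the text.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

BASE_MODEL = "lmsys/longchat-7b-16k"  # assumed Hugging Face id of the retrospective LM

# 4-bit quantization of the frozen base weights.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

# Rank-1 LoRA adapters as stated in the text; alpha, dropout, and target modules
# are illustrative assumptions.
lora_config = LoraConfig(
    r=1,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, quantization_config=bnb_config)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # on the order of 0.5M trainable parameters

# Training then proceeds in two stages with trl: a supervised fine-tuning pass on
# positively rated reflections (2 epochs), followed by reward modeling and PPO
# fine-tuning (epochs=4, batch size=8, lr=1.4e-5) on the full offline rating dataset.
```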
2308.02151#31
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02151
32
• Reflexion (Shinn et al., 2023). This is the state-of-the-art language agent architecture that the authors identify from the literature so far. This agent improves from verbal feedback from the environment and does not use gradient signals explicitly. It can serve as a baseline for showing the effectiveness of gradient-based learning for reasoning and planning.
7.3 COMPARISONS WITH BASELINES
We present the performance of our Retroformer in Table 2 and compare it against the baselines in Fig. 4. As shown in the table, we observe that the generated reflections consistently improve agent performance over trials, and the effects of reflection sampling are significant in the first few trials. This is expected, as sampling in the early trials benefits exploration of the state-action space.
Table 2: Retroformer success rate (succ%) and average reward over N trials with best-of-n shots reflection sampling scored by the learned reward model in the HotPotQA environment. The standard deviation is calculated for the average rewards over 100 tasks and n shots.
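The best-of-n reflection sampling referenced in the Table 2 caption can be sketched as below. The `generate` and `score` callables stand in for the retrospective LM and the learned reward model; their names and signatures are assumptions for illustration, not the paper's code.

```python
from typing import Callable, List

def best_of_n_reflection(
    failed_trajectory: str,
    generate: Callable[[str], str],      # samples one reflection from the retrospective LM
    score: Callable[[str, str], float],  # learned reward model from the RLHF pipeline
    n: int = 4,
) -> str:
    """Sample n candidate reflections and keep the one the reward model rates highest.

    The chosen reflection is what gets prepended to the actor prompt in the next trial.
    """
    candidates: List[str] = [generate(failed_trajectory) for _ in range(n)]
    scores = [score(failed_trajectory, c) for c in candidates]
    best = max(range(n), key=lambda i: scores[i])
    return candidates[best]
```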
2308.02151#32
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
32
Model Transformers Agent (GPT-4) [38] MiniGPT-4-8B [93] BLIP-2-12B [46] LLaVA-7B [51] MiniGPT-4-14B [93] Otter-9B [44] OpenFlamingo-9B [4, 9, 8] InstructBLIP-14B [23] InstructBLIP-8B [23] LLaVA-13B [51] MM-ReAct-GPT-3.5 [85] LLaVA-7B (LLaMA-2) [51] LLaMA-Adapter v2-7B [32] LLaVA-13B (V1.3, 336px) [51] LLaVA-13B (LLaMA-2) [51] MM-ReAct-GPT-4 [85] Rec OCR Know Gen 3.2 18.2 13.9 27.4 7.0 27.5 18.9 28.0 22.1 29.9 13.8 27.3 13.1 28.7 9.0 30.8 18.2 32.4 26.4 30.9 24.2 20.7 20.1 32.9 33.4 38.5 25.8 38.1 29.3 39.2 35.0 33.1 3.9
2308.02490#32
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
33
Environment: HotPotQA (success rate % / average reward)
Settings              1 shot              2 shots             4 shots
N=0 (initial trial)   ——— 34% / 0.43 ± 0.46 ———
N=1                   39% / 0.45 ± 0.47   42% / 0.48 ± 0.46   45% / 0.52 ± 0.47
N=2                   42% / 0.48 ± 0.46   45% / 0.52 ± 0.47   48% / 0.54 ± 0.47
N=4                   50% / 0.55 ± 0.48   52% / 0.58 ± 0.46   53% / 0.60 ± 0.46
Furthermore, as shown in Fig. 4, our agent outperforms the two strong baselines. The results indicate that our reinforced model provides the language agents with better reflection responses in early trials, which enables the agents to learn faster, while also achieving better performance in the end. Our Retroformer agent achieves a 53% success rate in 5 trials, which is better than the state-of-the-art 50% success rate reported in (Jang, 2023), which uses a much larger language model, i.e., GPT-3 (model: text-davinci-003), as the retrospective component. The results show the effectiveness of our policy gradient approach for fine-tuning the agent with offline samples.
2308.02151#33
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
33
24.2 20.7 20.1 32.9 33.4 38.5 25.8 38.1 29.3 39.2 35.0 33.1 3.9 15.0 11.1 17.1 16.1 17.8 16.7 16.0 14.6 20.1 31.5 20.1 20.3 22.3 22.7 65.7 2.2 12.8 11.8 16.3 20.4 14.2 16.4 9.8 16.5 23.5 21.5 19.0 31.4 25.2 26.5 29.0 Spat Math 4.0 12.4 7.7 20.3 5.8 16.2 11.5 21.2 3.8 22.2 3.8 24.4 7.7 21.0 10.5 21.1 7.7 18.6 7.7 24.3 26.2 32.3 5.2 25.7 3.8 22.9 11.2 31.3 7.7 29.6 69.2 56.8 Total 13.4±0.5 22.1±0.1 22.4±0.2 23.8±0.6 24.4±0.4 24.7±0.3 24.8±0.2 25.6±0.3 26.2±0.2
2308.02490#33
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
34
[Figure 4: success rate (%) over episodes on HotPotQA (100 distractor tasks), with curves for Retroformer + 4-shot sampling, Reflexion, and ReAct.]
Figure 4: We compare Retroformer with baselines and observe consistent and faster performance improvement in terms of success rate.
7.4 DISCUSSION OF THE REINFORCED AGENT BEHAVIORS
We examine how the retrospective model Mr is improved by policy optimization by comparing the generated responses from the frozen retrospective LM with those from the fine-tuned, reinforced LM. We observe two general emerging behaviors that the fine-tuned LM learns to follow through trial and error. These two reinforced behaviors help the agents outperform the baselines in Fig. 4.
2308.02151#34
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02151
35
Precise Credit Assignment for Failed Trials. As shown in Fig. 5, we give an example that illustrates why the reinforced responses better analyze the mistakes made in the prior failed trial and enable the agent to avoid repeating those errors in the next attempt. The agent failed in the last attempt mainly because it submitted both “Teen Titans” and “Teen Titans Go” as the answer for the Teen Titans spin-off series, while the correct answer should only include “Teen Titans Go”; the agent forgot its original goal during a chain of thoughts, actions, and lengthy observations. While the reflection response from the frozen model contains many “should have” terms and is very fluent, it simply reiterates the prior action sequence that led to failure as the steps the agent should have taken, which prompts the agent to follow these steps again in the next attempt, resulting in an infinite loop.
2308.02151#35
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
35
knowledge. LLaMA-Adapter v2-7B is the best model in this capability with a score of 31.4%, as shown in Table 3. It may benefit from its large-scale tuning data, including GPT-4-LLM [64]. MM-ReAct-GPT-4 [85] also achieves a remarkable score (29.0%) in this capability because of its strong LLM backbone [59], coupled with external tools like Bing search for knowledge acquisition.
Language generation. “Language generation” denotes the proficiency in producing fluent and informative text outputs, as illustrated in Tables 10(a), 12(b), 13(a), and 15(a). Performance within this category is highly correlated with the efficacy of language modeling. As a result, MM-ReAct-GPT-4 [85] and LLaVA-13B (LLaMA-2) stand out as the top two models. Their success can be attributed to the GPT-4 and LLaMA-2 language models on which these systems are built.
2308.02490#35
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
36
Figure 5 content (columns: Question; Prior failed trial; Reflection — Green: Retroformer, Brown: Frozen LM):
Question: Lollipop Chainsaw featured Juliet Starling, who was voiced by a Canadian-American actress who has done voice roles for what Teen Titans spinoff series?
Prior failed trial:
Thought 1: I need to search Juliet Starling and find the Canadian-American actress who voiced her, then find the Teen Titans spinoff series she has done voice roles for.
Action 1: Search[Juliet Starling]
Observation 1: Could not find [Juliet Starling].
Thought 2: Juliet Starling is a character from Lollipop Chainsaw, so I can search Lollipop Chainsaw.
Action 2: Search[Lollipop Chainsaw]
Observation 2: Lollipop Chainsaw is a ...
Thought 3: Juliet Starling is voiced by Tara Strong, so I need to search Tara Strong and find the Teen Titans spinoff series she has done voice roles for.
Action 3: Search[Tara Strong]
Observation 3: ...
Reflection: I failed to find the Teen Titans spinoff series that Tara Strong has done voice roles for. Instead, I found Lollipop Chainsaw, a video game that Juliet Starling appeared in. I should have searched Tara Strong and Teen Titans or Teen Titans Go! specifically in my previous action to find the correct answer.
2308.02151#36
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
36
Spatial awareness. “Spatial awareness” involves the understanding of the spatial relationship among visual object regions (e.g., Table 10(c)) and scene text regions (e.g., Table 13(a, b)). MMReAct-GPT4 [85] has a significant lead in this capability (56.8%), because the adopted tools, such as dense captioning and OCR, provide detailed object and scene text location information in the form of coordinates, which can be understood and processed by GPT-4. When it comes to end-to-end tuned models, LLaVA-13B (V1.3, 336px) exhibits the best performance of 31.3%. The tuning data for LLaVA is partly derived from capturing object names and their corresponding coordinates as input. This procedure ensures the generation of data imbued with spatial information, potentially aiding the models in developing and enhancing their spatial awareness capabilities. Math. “Math” measures the arithmetic capability on either written equations (e.g., Table 15(b)) or problems in the wild (e.g., Table 11(d)). Notably, MMReAct-GPT4 [85] consistently outperforms other models. This superior performance may be attributed to the adopted PAL math tool (Program- aided Language Models) [31]. # 4.2.2 Regarding each capability integration
2308.02490#36
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
37
Prior failed trial (continued):
Thought 3 (continued): ... need to search Tara Strong and find the Teen Titans spinoff series she has done voice roles for.
Action 3: Search[Tara Strong]
Observation 3: Tara Lyn Strong ...
Thought 4: Tara Strong has voiced characters in Teen Titans and Teen Titans Go!, so the answer is Teen Titans and Teen Titans Go!.
Action 4: Finish[Teen Titans and Teen Titans Go!]
Reflection: My thought process was strong but my execution was flawed. I should have searched for Lollipop Chainsaw first and looked up the Canadian-American actress who voiced Juliet Starling afterwards. I also should have looked up Tara Strong's filmography and searched for any voice roles she did specifically for Teen Titans or Teen Titans Go!. Next trial: Question: The novel was adapted into a film by a director known for his work on another iconic 1980s franchise. What is this film based on? Thought 1: ...
2308.02151#37
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
37
# 4.2.2 Regarding each capability integration
Recognition, knowledge, and language generation. As shown in Table 10(a), this capability integration can enable models to explain visual jokes. LLaMA-Adapter v2-7B [32] is the best model in this capability integration. This may be attributed to its large-scale tuning data, as shown in Table 2. LLaVA-13B (LLaMA-2) and LLaVA-13B (V1.3, 336px) [51] are the other two outstanding models. Stronger language models may be one reason; the tuning data of LLaVA shown in Table 2 should also not be ignored.
Table 4: MM-Vet evaluation results on various LMMs regarding each capability integration. Examples of each capability integration are shown in supplementary material Tables 10-15. For each column, the highest, the second, and the third highest figures are highlighted by green, orange, and blue backgrounds. All the numbers are presented in %, and the full score is 100%.
2308.02490#37
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
38
Figure 5: Response refinement from the reinforced retrospective model. Note that the lengthy observation step in the prior failed trial column is abbreviated for better presentation purposes. In contrast, our reinforced response prompts the agent to focus on spinoff series at the very beginning of the response and asks the agent to find the answer in the previous actions and observations, with special attention to the search results of "Tara Strong". In summary, our proposed reinforcement learning approach exhibits better credit assignment and root-cause analysis abilities, and can recommend concrete, actionable insights for the next attempt.
2308.02151#38
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
38
[Table 4, flattened by extraction: per-capability-integration MM-Vet scores for Transformers Agent (GPT-4) [38], MiniGPT-4-8B [93], BLIP-2-12B [46], LLaVA-7B [51], MiniGPT-4-14B [93], Otter-9B [44], OpenFlamingo-9B [4, 9, 8], InstructBLIP-14B [23], InstructBLIP-8B [23], LLaVA-13B [51], MM-ReAct-GPT-3.5 [85], LLaVA-7B (LLaMA-2) [51], LLaMA-Adapter v2-7B [32], LLaVA-13B (V1.3, 336px) [51], LLaVA-13B (LLaMA-2) [51], and MM-ReAct-GPT-4 [85] across capability integrations such as OCR+Rec, Rec+Spat, OCR+Know, OCR+Spat, OCR+Math, and Rec+Know+Gen; the mapping of the numeric values to rows and columns is not recoverable from the extraction.]
2308.02490#38
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
39
Figure 6 panel text, spilled from a two-column layout contrasting the frozen model response with the reinforced model response: "I think the issue might have been with the search. I was not able to find the name "Miller" in the paragraph, which is why I was not able to find James Henry Miller's wife. I should have searched for "Miller's wife" instead of "James Henry Miller's wife" to avoid this confusion." "I got stuck in a loop where I kept searching for the same person, Henry Miller, instead of Beatrice Sylvas Wickens. It's possible that I may have misheard or misunderstood the question. To avoid this, I should have written down the question and made sure I understood it correctly before beginning the search. Additionally, I should have tried searching for "James Henry Miller's wife" instead of just "James Henry Miller"." "New Plan: To avoid confusion, I will search for "Miller's wife" instead of "James Henry Miller's wife" and directly look for the nationality of the wife." Figure 6: Response refinement from the reinforced retrospective model with structured format.
2308.02151#39
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
39
[Table 4, continued: further per-capability-integration score columns, flattened by extraction; the numeric values cannot be mapped back to models or capability integrations.]
2308.02490#39
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
40
Figure 6: Response refinement from the reinforced retrospective model with structured format. Structured Reflections and Action Plans. We observe an emergent behavior of the reinforced model: it automatically paraphrases the original responses into two separate structured sections, namely a Reflection section and a New Plan section, even though it is not explicitly trained or prompted to do so. One such example is shown in Fig. 6. The paraphrased response retrospects in the first paragraph and provides actionable insights next, while the response from the frozen LM interleaves both parts in one paragraph, making it hard to comprehend. We can also observe from Fig. 5 that the reinforced response removes the messy, irrelevant "Next trial:" content at the end for a cleaner format; that content very likely results from LLM hallucination. # 8 CONCLUSION In this study, we present Retroformer, an elegant framework for iteratively improving large language agents by learning a plug-in retrospective model. This model, through the process of policy optimization, automatically refines the prompts provided to the language agent with environmental feedback. Through extensive evaluations on real-world datasets such as HotPotQA, the method has been shown to effectively improve the performance of large language agents over time, both in learning speed and in final task completion.
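To make the structured format concrete, the snippet below is a minimal sketch of how a reinforced retrospective response could be split into its Reflection and New Plan parts before being appended to the agent's memory. The function name, the "New Plan:" marker handling, and the example string are illustrative assumptions, not taken from the paper's released code.

```python
import re
from typing import Dict

def split_retrospective(response: str) -> Dict[str, str]:
    """Split a retrospective response into 'reflection' and 'new_plan' parts.

    Assumes the reinforced model emits a 'New Plan:' marker, as observed in
    Fig. 6; if the marker is absent (e.g., frozen-model output), the whole
    text is treated as the reflection.
    """
    match = re.search(r"New Plan:\s*", response, flags=re.IGNORECASE)
    if match is None:
        return {"reflection": response.strip(), "new_plan": ""}
    return {
        "reflection": response[: match.start()].strip(),
        "new_plan": response[match.end():].strip(),
    }

# Toy usage mirroring the Fig. 6 style of output.
example = (
    'I should have searched for "Miller\'s wife" to avoid this confusion. '
    'New Plan: I will search for "Miller\'s wife" and directly look for '
    "the nationality of the wife."
)
parts = split_retrospective(example)
print(parts["reflection"])
print(parts["new_plan"])
```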
2308.02151#40
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
40
[Table 4, continued: score columns for further capability integrations (including Rec, OCR+Know, Know+Spat, and Gen combinations), flattened by extraction; the numeric values cannot be mapped back to models or columns.]
2308.02490#40
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
41
By considering the Actor LLM as a component of the environment, our policy gradient approach allows learning from arbitrary reward signals from diverse environments and tasks. This facilitates the iterative refinement of a specific component within the language agent architecture – the retrospective model, in our case, while circumventing the need to access the Actor LLM parameters or propagate gradients through it. This agnostic characteristic renders Retroformer a concise and adaptable plug-in module for different types of cloud-hosted LLMs, such as GPT and Bard. Furthermore, our approach is not limited to enhancing the retrospective model alone; it can be applied to reinforce other components within the agent architecture, such as the memory and summarization module, or the actor prompt. By selectively focusing on the component to be fine-tuned while keeping the remainder fixed, our proposed policy gradient approach allows for iterative improvements of the component with reward signals obtained from the environment. # REFERENCES Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, et al. Do as i can, not as i say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691, 2022.
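As a concrete illustration of this plug-in setup, here is a minimal sketch that fine-tunes only a small retrospective model with PPO on scalar episode returns, treating the frozen Actor LLM and the task environment as a black box that maps a generated reflection to a reward. It assumes the TRL `PPOTrainer` interface (the `trl` library is cited in the references); `run_actor_episode`, the prompt text, and the model name are placeholders, not the paper's released code.

```python
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

def run_actor_episode(reflection_text: str) -> float:
    """Stand-in for the frozen Actor LLM plus environment rollout.

    In the real setup this would append the reflection to the actor prompt,
    run one episode (e.g., a HotPotQA question), and return the episode
    return. A toy heuristic keeps the sketch runnable end to end.
    """
    return 1.0 if "new plan" in reflection_text.lower() else 0.0

model_name = "gpt2"  # placeholder; the retrospective model is a small LM
config = PPOConfig(model_name=model_name, batch_size=4, mini_batch_size=4)
model = AutoModelForCausalLMWithValueHead.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
ppo_trainer = PPOTrainer(config, model, ref_model=None, tokenizer=tokenizer)

# One batch of failed-trial prompts for the retrospective model (placeholders).
failed_trials = ["Question: ... Prior failed trial: ... Why did it fail?"] * 4
queries = [tokenizer(t, return_tensors="pt").input_ids.squeeze(0) for t in failed_trials]

responses, rewards = [], []
for q in queries:
    # The retrospective model proposes a reflection / new plan.
    out = ppo_trainer.generate(q, max_new_tokens=64, do_sample=True,
                               pad_token_id=tokenizer.eos_token_id)
    resp = out.squeeze(0)[q.shape[0]:]  # strip the prompt tokens
    responses.append(resp)
    # The reward comes from re-running the frozen actor with the reflection.
    reflection = tokenizer.decode(resp, skip_special_tokens=True)
    rewards.append(torch.tensor(run_actor_episode(reflection)))

# One PPO step updates the retrospective model only; no gradients touch the actor.
stats = ppo_trainer.step(queries, responses, rewards)
```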
2308.02151#41
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
41
[Table 4, continued: the remaining capability-integration columns together with per-model total scores (reported with ± standard deviations), flattened by extraction; the values cannot be mapped back to models or columns.]
2308.02490#41
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
42
Harrison Chase. Langchain. https://github.com/hwchase17/langchain, 2023. Significant Gravitas. Auto-GPT. https://github.com/Significant-Gravitas/Auto-GPT, 2023. Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, and Zhiting Hu. Reasoning with language model is planning with world model. arXiv preprint arXiv:2305.14992, 2023. Eric Jang. Can llms critique and iterate on their own outputs? evjang.com, Mar 2023. URL https://evjang.com/2023/03/26/self-reflection.html. Aman Madaan, Alexander Shypula, Uri Alon, Milad Hashemi, Parthasarathy Ranganathan, Yiming Yang, Graham Neubig, and Amir Yazdanbakhsh. Learning performance-improving code edits. arXiv preprint arXiv:2302.07867, 2023a.
2308.02151#42
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02151
43
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651, 2023b. Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. CoRR, abs/1602.01783, 2016. Yohei Nakajima. Babyagi. https://github.com/yoheinakajima/babyagi, 2023. Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. Webgpt: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021.
2308.02151#43
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
43
Recognition (sole). This category contains samples requiring only recognition, as shown in Table 10(b). InstructBLIP-14B and InstructBLIP-8B [23] achieve the best performance, which may result from their tuning data including recognition datasets such as VQA v2 [35] and GQA [37]. OCR and spatial awareness. For this integration, an example is shown in Table 10(c). MM-ReAct-GPT-4 [85] is the best method for this integration. Notably, compared with MM-ReAct-GPT-3.5, MM-ReAct-GPT-4 shows a significant improvement of over 40%, indicating the importance of the LLM in integrating OCR and location information. OCR, spatial awareness, and math. An example of this integration is shown in Table 11(a), which requires reading the floor plan and conducting arithmetic. Compared with the above integration, this combination involves one more capability, math. The observation is similar to that for the integration of OCR and spatial awareness: MM-ReAct-GPT-4 [85] still achieves the best performance.
2308.02490#43
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
44
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022. Joon Sung Park, Joseph C O'Brien, Carrie J Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442, 2023. Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290, 2023. Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023.
2308.02151#44
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
44
Recognition and spatial awareness. Table 11(b) shows an example of this integration. LLaVA-13B (V1.3, 336px) [51] performs best for this category. Compared with LLaVA-13B (LLaMA-2), LLaVA-13B (V1.3, 336px) obtains an improvement of 8.4%, indicating the significant contribution of the larger image resolution. OCR (sole). This task requires OCR only, as shown in Table 11(c). MM-ReAct-GPT-4 [85] has the best results for sole OCR, owing to an OCR tool from the Azure API. Notably, MM-ReAct-GPT-4 is much better than MM-ReAct-GPT-3.5, with an improvement of 23.0%, demonstrating the importance of language models in OCR. OCR and Math. This integration enables reading text from real-world scenarios and solving math problems, as shown in Table 11(d). MM-ReAct-GPT-4 [85] obtains the best performance in this capability integration, far ahead of other models. We highly recommend using MM-ReAct-GPT-4 to complete tasks related to this capability integration.
2308.02490#44
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
45
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, and Pieter Abbeel. Trust region policy optimization. CoRR, abs/1502.05477, 2015. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. CoRR, abs/1707.06347, 2017. Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv preprint arXiv:2303.17580, 2023. Noah Shinn, Federico Cassano, Beck Labash, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning. arXiv preprint arXiv:2303.11366, 2023. Charlie Snell, Ilya Kostrikov, Yi Su, Mengjiao Yang, and Sergey Levine. Offline rl for natural language generation with implicit language q learning. arXiv preprint arXiv:2206.11871, 2022.
2308.02151#45
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
45
Other capability integrations. The 9 other capability integrations follow a long-tailed distribution, where MM-ReAct-GPT-4 achieves the best scores in 5 of the 9 integrations. Their examples are shown in Tables 12-15. # 4.3 Result discussion # 4.3.1 Foundation models and tuning data In this subsection, we discuss the modules in LMMs and speculate how each component may affect the LMMs' capabilities in different aspects, evaluated by MM-Vet. We mainly consider the models based on open-sourced LLMs, i.e., Flan-T5 [21], LLaMA [73], Vicuna [92], and LLaMA-2 [74]. Table 5: MM-Vet (Bard set) evaluation results on various LMMs regarding each core VL capability. For each column, the highest, the second, and the third highest figures are highlighted by green, orange and blue backgrounds. All the numbers are presented in % and the full score is 100%. Total column: 30.3±0.1, 31.5±0.1, 27.6±0.2, 48.1±0.2, 53.5±0.2.
2308.02490#45
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
46
R. S. Sutton, D. McAllester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems 12, volume 12, pp. 1057–1063. MIT Press, 2000. Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. The MIT Press, second edition, 2018. URL http://incompleteideas.net/book/the-book-2nd.html. Leandro von Werra, Younes Belkada, Lewis Tunstall, Edward Beeching, Tristan Thrush, and Nathan Lambert. Trl: Transformer reinforcement learning. https://github.com/lvwerra/trl, 2020. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2018.
2308.02151#46
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
46
[Table 5, flattened by extraction: MM-Vet (Bard set) per-capability scores for LLaVA-13B (LLaMA-2) [51], LLaVA-13B (V1.3, 336px) [51], MM-ReAct-GPT-3.5 [85], MM-ReAct-GPT-4 [85], and Bard [34] over the Rec, OCR, Know, Gen, Spat, and Math capabilities; the mapping of the numeric values to rows and columns is not recoverable.] Table 6: MM-Vet (Bard set) evaluation results on various LMMs regarding each capability integration. For each column, the highest, the second, and the third highest figures are highlighted by green, orange and blue backgrounds. All the numbers are presented in % and the full score is 100%.
2308.02490#46
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
47
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models. In International Conference on Learning Representations (ICLR), 2023. Appendix for “Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization” A FULL EXAMPLES A.1 ACTOR PROMPT ENGINEERING An example of the HotPotQA actor language model prompt is shown below.
2308.02151#47
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
47
OCR Rec OCR Spat Know Gen Rec Math Math 26.6 55.2 18.8 14.3 57.1 39.5 0.0 8.2 9.1 Model Rec Know 20.0 Vicuna-13B (LLaMA-2) [51] 20.0 Vicuna-13B (V1.3, 336px) [51] 21.9 59.0 22.9 14.3 85.7 25.5 20.0 11.3 38.8 31.2 35.7 28.6 56.4 MM-ReAct-GPT-3.5 [85] 17.0 35.2 70.8 78.6 28.6 81.5 63.6 40.0 MM-ReAct-GPT-4 [85] 0.0 52.3 70.3 45.2 56.4 42.9 70.2 18.2 Bard [34] OCR Spat Rec Spat OCR Rec Rec OCR OCR OCR Know Gen Know Spat Spat Gen 1.3 56.8 28.6 50.0 33.3 15.0 49.3 14.3 50.0 33.3 25.0 100.0 0.0 0.0 68.3 88.0 14.3 50.0 0.0 77.7 81.5 28.6 50.0 66.7 Rec
2308.02490#47
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
48
Solve a question answering task with interleaving Thought, Action, Observation steps. Thought can reason about the current situation, and Action can be three types: (1) Search[entity], which searches the exact entity on Wikipedia and returns the first paragraph if it exists. If not, it will return some similar entities to search. (2) Lookup[keyword], which returns the next sentence containing the keyword in the last passage successfully found by Search. (3) Finish[answer], which returns the answer and finishes the task. You may take as many steps as necessary. You have attempted to answer the following question before and failed. The following reflection(s) give a plan to avoid failing to answer the question in the same way you did previously. Use them to improve your strategy of correctly answering the given question. Reflections: - I should have searched for the title of the book instead of the author. I also didn't check for the proverb which was not the question. - I should have searched for "It Takes a Family by Rick Santorum" instead of just "It Takes a Family" to get the correct answer. - My search was not concise and I did not specify what I was looking for. I also did not check similar results. In this case, I
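The actor prompt above defines a small text protocol between the LLM and a Wikipedia Docstore environment: the model emits one of Search[entity], Lookup[keyword], or Finish[answer], and the environment returns an observation. The sketch below is only a hypothetical illustration of how such action strings could be parsed and dispatched; the `docstore` object and its `search`/`lookup` methods are stand-ins, not the paper's environment code.

```python
import re

# Hypothetical sketch of dispatching the three actions named in the actor
# prompt above; `docstore` and its methods are illustrative stand-ins.
ACTION_RE = re.compile(r"^(Search|Lookup|Finish)\[(.+)\]$")

def execute_action(action_text: str, docstore, state: dict) -> str:
    match = ACTION_RE.match(action_text.strip())
    if match is None:
        return "Invalid action. Use Search[entity], Lookup[keyword], or Finish[answer]."
    name, arg = match.groups()
    if name == "Search":
        # Returns the first paragraph of the page, or similar titles if the
        # exact entity is not found.
        return docstore.search(arg)
    if name == "Lookup":
        # Returns the next sentence containing the keyword in the passage
        # last returned by Search.
        return docstore.lookup(arg)
    # Finish[answer] ends the episode; the environment scores the answer.
    state["answer"] = arg
    return f"Episode finished with answer: {arg}"
```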
2308.02151#48
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02151
49
Family" to get the correct answer. - My search was not concise and I did not specify what I was looking for. I also did not check similar results. In this case, I should have searched for 'It Takes a Family by Rick Santorum' to find the correct answer. Question: It Takes a Family is a response to this 1996 book that was published by who Thought 1: I need to search for the title of the book, "It Takes a Family", and find who published it. Action 1: Search["It Takes a Family"] Observation 1: Richard John Santorum ( san-TOR-om; born May 10, 1958) is an American politician, attorney, author, and political commentator who represented Pennsylvania in the United States Senate from 1995 to 2007. He was the Senate's third-ranking Republican during the final six years of his tenure. He also ran unsuccessfully for President of the United States in the 2012 Republican primaries, finishing second to Mitt Romney.Santorum was elected to the United States Senate from Pennsylvania in 1994, He served two terms until losing his 2006 reelection bid. A Roman Catholic, Santorum is a social conservative who opposes abortion and same-sex marriage and embraced a cultural warrior image during his Senate tenure. While serving as a senator, Santorum authored the Santorum
2308.02151#49
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
49
Vision. For the Vision component, two models have been employed in the end-to-end LMMs we evaluated, namely, CLIP-ViT/L14 [66] (428M) and EVA-ViT-G (1.13B). Determining a superior model is currently not possible due to the absence of a comprehensive ablation study [90]. However, it’s noteworthy that, when paired with the same language model, Vicuna-7B, InstructBLIP-8B excels in recognition tasks, while LLaVA-7B works particularly well for OCR. Language. There is a notable trend indicating that superior language models (LLMs) typically yield better performance, such as comparing the 7B and 13B variants of different models, except for the outlier of InstructBLIP where the 8B version performs better than the 14B one. Tuning data. Increasing the volume of data can enhance performance. An example is InstructBLIP- 8B [23], which utilizes more data from 26 publicly available datasets to tune the model and achieve higher scores than BLIP-2-12B. # 4.3.2 Comparison with Bard
2308.02490#49
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
50
conservative who opposes abortion and same-sex marriage and embraced a cultural warrior image during his Senate tenure. While serving as a senator, Santorum authored the Santorum Amendment, which would have promoted the teaching of intelligent design. He was a leading sponsor of the 2003 federal law known as the Partial-Birth Abortion Ban Act.In the years following his departure from the Senate, Santorum has worked as a consultant, private practice lawyer, and news contributor. He ran for the Republican nomination in the 2012 U.S presidential election. Before suspending his campaign on April 10, 2012, Santorum exceeded expectations by winning 11 primaries and caucuses and receiving nearly four million votes, making him the runner-up to eventual nominee Mitt Romney. Santorum ran for president again in 2016, but ended his campaign in February 2016 after a poor showing in the Iowa caucuses. In January 2017 he became a CNN senior political commentator. However, he was terminated from his contract with CNN in May 2021 due to comments he made about Native Americans a few weeks prior which were deemed "dismissive". Thought 2: The paragraph mentions Rick Santorum, so I need to search "It Takes a Family by Rick Santorum". Action
2308.02151#50
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
50
# 4.3.2 Comparison with Bard Bard [34] is one popular closed-source commercial LMM system. One problem in evaluation is that Bard rejects images containing people and instead outputs “Sorry, I can’t help with images of people yet.” To conduct a fair comparison with other models, we constructed a subset of MM-Vet with 168 samples that Bard could process, henceforth referred to as the Bard set. The results on the Bard set are shown in Tables 5 and 6. Bard achieves the highest scores in three out of six capabilities, seven out of fifteen capability integrations, and holds the highest overall score (53.5%). MM-ReAct-GPT-4 [85] outperforms in the remaining three out of six capabilities, and tops the chart in nine out of the fifteen capability integrations. Particularly, MM-ReAct performs better in OCR, spatial awareness, and math capabili- ties, indicating the potential benefit of having specialized external tools, even when working with state-of-the-art LMMs. When considering end-to-end models, there is still a big gap from Bard. For instance, Vicuna-13B (V1.3, 336px) [51] obtains 31.5%, a substantial 22.0% lower than Bard. Future stronger open-sourced LLMs and advancements in multimodal training hold potential to further narrow this gap.
2308.02490#50
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
51
were deemed "dismissive". Thought 2: The paragraph mentions Rick Santorum, so I need to search "It Takes a Family by Rick Santorum". Action 2: Search["It Takes a Family by Rick Santorum"] Observation 2: It Takes a Family is a 2005 book by then Pennsylvania Senator Rick Santorum. The title is a response to the 1996 book It Takes a Village by then-First Lady Hillary Clinton. In the book, Santorum states that the family structure is necessary. He argues that liberal social policies have devastated the family structure and that prevailing socially liberal attitudes have systematically devalued families across the board. He argues that the government should take a proactive role in promoting strong families. Thought 3: It Takes a Family was published by Rick Santorum, so the answer is Rick Santorum. Action 3: Finish[Rick Santorum] Observation 3: Answer is INCORRECT
2308.02151#51
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
51
# 4.3.3 Comparison with GPT-4V(ision) We evaluate and benchmark the state-of-the-art LMM, GPT-4V(ision) [59–61, 1, 84] on MM-Vet. In our queries to GPT-4V, we prepend the prompt with “Generate a short and concise response to the following image text pair.” The quantitative results are shown in Tables 7, 8, and the qualitative Table 7: MM-Vet evaluation results on LLaVA, MM-ReAct and GPT-4V regarding each core VL capability. For each column, the highest and the second highest figures are highlighted by green and orange backgrounds. All the numbers are presented in % and the full score is 100%. LLaVA-13B (LLaMA-2) [51]: Rec 39.2, OCR 22.7, Know 26.5, Gen 29.3, Spat 29.6, Math 7.7, Total 32.9±0.1; MM-ReAct-GPT-4 [85]: Rec 33.1, OCR 65.7, Know 29.0, Gen 35.0, Spat 56.8, Math 69.2, Total 44.6±0.2; GPT-4V [60]: Rec 67.5, OCR 68.3, Know 56.2, Gen 60.7, Spat 69.4, Math 58.6, Total 67.7±0.3
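The querying setup described above amounts to prefixing every MM-Vet question with the fixed instruction before sending it, together with the image, to GPT-4V. A minimal, hypothetical illustration of that prompt construction (the `query_gpt4v` callable below is a placeholder for whatever multimodal chat client is used, not the paper's evaluation harness):

```python
# Hypothetical sketch of forming an MM-Vet query to GPT-4V as described above.
PREFIX = "Generate a short and concise response to the following image text pair."

def build_query(question: str, image_path: str) -> dict:
    # The fixed instruction is prepended to each MM-Vet question.
    return {"text": f"{PREFIX}\n{question}", "image_path": image_path}

def answer_with_gpt4v(question: str, image_path: str, query_gpt4v) -> str:
    payload = build_query(question, image_path)
    return query_gpt4v(payload["text"], payload["image_path"])
```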
2308.02490#51
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
52
A.2 RETROSPECTIVE PROMPT ENGINEERING An example of the HotPotQA retrospective instruction prompt is shown below. You are an advanced reasoning agent that can improve based on self reflection. You will be given a previous reasoning trial in which you were given access to a Docstore API environment and a question to answer. You were unsuccessful in answering the question either because you guessed the wrong answer with Finish[<answer>], or you used up your set number of reasoning steps. In a few sentences, Diagnose a possible reason for failure and devise a new, concise, high level plan that aims to mitigate the same failure. Use complete sentences. Here are some examples. [Few-Shot Demonstrations] Previous trial: Question: The 2016 Washington State Cougars were led by the coach who previously helmed which other team? Thought: I need to search Washington State Cougars and find the coach of the 2016 team, then find which other team the coach previously helmed. Action: Search[Washington State Cougars] Observation: The Washington State Cougars (known informally as the Cougs) are the athletic teams that represent Washington State University. Located in Pullman, Washington, WSU is a member of the Pac-12 Conference in NCAA Division I. The athletic program comprises ten women's sports and six men's intercollegiate sports, and also offers various intramural sports.
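The retrospective prompt quoted above is filled in with a failed trial and asks for a short verbal reflection; that reflection is then prepended to the actor prompt of the next attempt (as in the "Reflections:" list shown earlier). The following is only a rough sketch under those assumptions: `call_llm`, the abbreviated template text, and the helper names are hypothetical, not the released Retroformer code.

```python
# Hypothetical sketch of the retrospective step described above.
RETRO_INSTRUCTION = (
    "You are an advanced reasoning agent that can improve based on self reflection. "
    "You will be given a previous reasoning trial ... Diagnose a possible reason for "
    "failure and devise a new, concise, high level plan that aims to mitigate the "
    "same failure. Use complete sentences."
)

def reflect(failed_trial: str, few_shot: str, call_llm) -> str:
    # Fill the retrospective template with few-shot demonstrations and the failed trial.
    prompt = f"{RETRO_INSTRUCTION}\n\n{few_shot}\n\nPrevious trial:\n{failed_trial}\n\nReflection:"
    return call_llm(prompt)  # the retrospective model's verbal feedback

def actor_prompt(task_instruction: str, question: str, reflections: list[str]) -> str:
    # Prior reflections are prepended so the next trial avoids the same failure.
    notes = "\n".join(f"- {r}" for r in reflections)
    return f"{task_instruction}\n\nReflections:\n{notes}\n\nQuestion: {question}"
```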
2308.02151#52
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02151
53
Thought: The paragraph does not mention the coach of the 2016 team. I need to look up "coach" to find the coach of the 2016 team. Action: Lookup[coach] Observation: (Result 2/5) During the 2001-03 seasons, the Cougar football teams were distinguished by three ten-win seasons, three top ten poll rankings, and appearances in the Sun, Rose, and Holiday Bowls. The Cougars shared the Pac-10 title in 2002. Alumnus Paul Wulff, WSU's 31st head coach, was fired in late November 2011, after compiling a 9-40 (.184) record in four seasons. Mike Leach coached the team from 2012 to 2019, guiding them to six bowl games. Leach left to coach Mississippi State in 2020 and was replaced by Hawaii head coach Nick Rolovich. Rolovich was fired in October 2021 for not complying with the state's COVID-19 vaccine mandate. Defensive coordinator Jake Dickert was named the interim head coach. Thought: The 2016 Washington State Cougars were led by Mike Leach. I need to search Mike Leach and find which other team he previously helmed. Action: Search[Mike Leach]
2308.02151#53
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
53
Rec Know Gen Rec OCR Spat Math Model Rec Spat OCR LLaVA-13B (LLaMA-2) [51] 29.8 59.5 21.2 14.3 58.3 36.2 MM-ReAct-GPT-4 [85] GPT-4V [60] Rec Know 27.8 22.5 33.0 69.2 78.6 25.0 83.0 63.6 44.4 55.5 89.2 68.6 73.9 83.3 77.5 44.5 38.9 OCR Spat OCR Math 0.0 Rec OCR OCR Know Know Spat Gen 33.3 3.5 68.2 88.0 14.3 0.0 78.2 76.5 42.9 100.0 66.7 Rec Rec OCR Gen OCR Rec OCR Spat Spat 56.8 28.6 50.0 50.0 Rec OCR OCR Rec Spat Gen Know Math Spat Spat 0.0 0.0 8.0 50.0 80.0 0.0 50.0 89.0 0.0 Total 32.9±0.1 44.6±0.2 67.7±0.3
2308.02490#53
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02151
54
Action: Search[Mike Leach] Observation: Michael Charles Leach (March 9, 1961 - December 12, 2022) was an American college football coach who primarily coached at the NCAA Division I FBS level. He was a two-time national coach of the year, three-time conference coach of the year and the mastermind behind the NCAA record-setting air raid offense. He was the head coach at Texas Tech University from 2000 to 2009, where he became the winningest coach in school history. After Texas Tech, he coached at Washington State University from 2012 to 2019, where he recorded the third-most wins of any coach in school history. He then coached at Mississippi State from 2020 until his death in 2022. Leach was known for directing offenses using lots of passing to several receivers, in a spread system known as the air raid, which Leach developed with Hal Mumme when Mumme was head coach and Leach was offensive coordinator at Iowa Wesleyan, Valdosta State, and Kentucky in the 1990s. Leach's offenses with Mumme, and later as a head coach himself, have broken numerous school and NCAA records. On eighteen occasions, Leach led his unranked team to victory over a team ranked in the AP poll, which is the most in the AP era. Thought: Mike Leach previously helmed Texas Tech University and Kentucky. So the answer is Texas Tech University and Kentucky. Action: Finish[Texas Tech
2308.02151#54
Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
http://arxiv.org/pdf/2308.02151
Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese
cs.CL, cs.AI
null
null
cs.CL
20230804
20230804
[ { "id": "2303.17580" }, { "id": "2305.18290" }, { "id": "2302.07867" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2304.03442" }, { "id": "2204.01691" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2201.11903" }, { "id": "2305.14992" }, { "id": "2206.11871" } ]
2308.02490
54
results are expressed in Figures 3-6. Remarkably, GPT-4V achieves a score of 67.7%, surpassing both open-sourced LMMs [51] and LLM-based multimodal agents [85] by substantial margins. We aspire that the detailed per-category performance breakdown sheds light on potential avenues for enhancing model capabilities, thereby bridging the existing performance gap. To illustrate, integrating specialized tools within agent systems proves advantageous for specific functionalities like OCR and math. While other categories, such as recognition and language generation, would require enhancements in the core vision and language modules, respectively. Figures 3-6 offer an exhaustive analysis, highlighting exemplary success and failure instances of GPT-4V’s performance. This MM-Vet analysis is intended as a source of inspiration for future research, specifically in the realms of advanced multimodal prompting techniques [84, 81] and model refinements to further improve the LMM performance. # 4.4 Effectiveness analysis of LLM-based evaluation
2308.02490#54
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02490
55
# 4.4 Effectiveness analysis of LLM-based evaluation To verify the effectiveness of LLM-based evaluation for LMM predictions, we select the outputs from MM-ReAct-GPT-4 on 138 objective questions, which can be objectively annotated by humans. We compute the absolute value of the difference between the evaluator’s output score and the human-annotated score on each sample. By default, we use GPT-4 (0613) as the evaluator. Here we also replace it with other LLMs, e.g. LLaMA-2, GPT-3.5. The average difference to the human scoring is reported in Table 9, represented as ∆.
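Concretely, the agreement metric is the mean absolute difference ∆ = (1/N) Σ_i |s_i^LLM − s_i^human| over the N = 138 objectively annotated samples, with per-sample scores in [0, 1]. A minimal sketch of that computation:

```python
# Mean absolute difference between LLM-evaluator scores and human scores,
# as described above; both score lists hold per-sample values in [0, 1].
def delta(llm_scores, human_scores):
    assert len(llm_scores) == len(human_scores) and len(human_scores) > 0
    return sum(abs(l - h) for l, h in zip(llm_scores, human_scores)) / len(human_scores)

# Perfect agreement -> 0.0; maximal disagreement on every sample -> 1.0.
print(delta([1.0, 0.5, 0.0], [1.0, 0.0, 0.0]))  # 0.1666...
```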
2308.02490#55
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02490
56
The maximum potential discrepancy is 1.0. The baseline evaluation method, keyword matching, results in a high difference of 0.273. This illustrates the unsuitability of keyword matching for MM-Vet when dealing with open-ended answers. It is surprising that the ∆ of LLaMA-2-7B [74] is even higher than that of keyword matching, while the ∆ of LLaMA-2-13B is only marginally lower than that of keyword matching. This suggests that assessing open-ended outputs from models is far from straightforward. For OpenAI’s models, GPT-3.5 (turbo-0613) obtains a ∆ of 0.178, and GPT-4 (0613) achieves the lowest difference of 0.042. In this paper, we utilize GPT-4 (0613) to evaluate the outputs of LMMs. # 4.5 Takeaway notes We summarize the above analyses and discussions as follows:
2308.02490#56
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02490
57
# 4.5 Takeaway notes

We summarize the above analyses and discussions as follows:

• In the evaluation of integrated capabilities on MM-Vet (Sections 4.2, 4.3.2, 4.3.3), GPT-4V [60] and Bard [34] outperform existing open-sourced methods. The tool-using approach, MM-ReAct-GPT-4 [85], achieves comparable performance to Bard with effective external tools. The pros and cons in different categories motivate future studies on tool-enhanced LMMs. Among end-to-end LMMs, LLaVA-13B (LLaMA-2)/LLaVA-13B (V1.3, 336px) [51] demonstrates the best performance on MM-Vet.

Table 9: Averaged absolute differences (∆) between the evaluation scores of various LLM evaluators and those of human-annotated scores, on MM-ReAct-GPT4’s results. A smaller discrepancy indicates a better agreement with the gold standard of human evaluation, indicating a better evaluator.
2308.02490#57
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02490
58
Model                      ∆ (↓)
Keyword matching           0.273
LLM-based evaluation
  LLaMA-2-7B               0.307
  LLaMA-2-13B              0.254
  GPT-3.5 (turbo-0613)     0.178
  GPT-4 (0613)             0.042

• Analysis of open-source LMMs (Section 4.3.1) leaves room for ambiguity regarding the superior vision encoders for LMMs, based on current model comparisons. However, it is evident that stronger LLMs can boost the performance of LMMs.

• For open-ended evaluation (Section 4.4), it is effective to use GPT-4 for evaluating the open-ended outputs of LMMs. The use of less powerful LLMs could result in more significant deviations from the gold standard of human evaluation results.

• Current top-performing methods, such as GPT-4V [60] and MM-ReAct-GPT-4 [85], only achieve scores of around 68%/45% on MM-Vet (where full score is 100%). The gap signifies that further effort is necessary to enhance the performance of LMMs in terms of integrated capabilities, e.g., by developing stronger LMMs or extending LMMs with external tools.

# 5 Conclusion
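As a rough illustration of the open-ended evaluation takeaway above (using a strong LLM such as GPT-4 to grade free-form answers), the sketch below shows one way such an evaluator could be wired up. The prompt wording, the `call_llm` callable, and the score-parsing logic are assumptions made for illustration; they are not the few-shot prompt or code used in the paper.

```python
import re

def score_open_ended(question, ground_truth, prediction, call_llm):
    """Ask an LLM to grade a model prediction against the ground truth.

    `call_llm` is an assumed callable that sends a text prompt to an LLM
    (e.g. GPT-4) and returns its text reply; the prompt below is a
    simplified stand-in for the few-shot prompt described in the paper.
    Returns a correctness score in [0, 1].
    """
    prompt = (
        "Compare the prediction with the ground truth and output a single "
        "correctness score between 0.0 and 1.0.\n"
        f"Question: {question}\n"
        f"Ground truth: {ground_truth}\n"
        f"Prediction: {prediction}\n"
        "Score:"
    )
    reply = call_llm(prompt)
    match = re.search(r"\d*\.?\d+", reply)       # pull the first number from the reply
    score = float(match.group()) if match else 0.0
    return min(max(score, 0.0), 1.0)             # clamp to [0, 1]
```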
2308.02490#58
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02490
59
# 5 Conclusion

In this paper, we introduce the MM-Vet benchmark to evaluate LMMs in terms of their integrated vision-language capabilities. We have assembled a new multimodal dataset, which requires the integration of multiple vision-language capabilities. To facilitate open-ended evaluation, we adopt an LLM-based evaluator to grade open-ended outputs from LMMs. We then evaluate various LMMs on MM-Vet, analyzing their results to provide insights into different LMM system paradigms and module selections. We observe that the current best LMM, GPT-4V [59–61, 1, 84], achieves around a 68% score on MM-Vet (full score 100%), indicating the need for efforts to further improve the integrated capabilities of LMMs.

# A Examples of capability integrations

Table 10: Three samples requiring different capability integrations.

(a) Q: What occasions would someone use this meme?
GT: This meme, commonly known as "Screaming Panda," is typically used to express shock, surprise, or fear. It could be used in response to a startling or unexpected event, or to convey a sense of panic or alarm. Some possible occasions where someone might use this meme include:
2308.02490#59
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02490
61
Q: How many tomatoes are there?
GT: 5
Required capabilities: Recognition

(c) Q: What is located to the right of the shampoo?
GT: conditioner
Required capabilities: OCR, spatial awareness

Table 11: Four samples requiring different capability integrations.

(a) Q: Which room is bigger, the double garage or the living room?
GT: double garage
Required capabilities: OCR, spatial awareness, math

(b) Q: On the right desk, what is to the left of the laptop?
GT: table lamp <OR> desk lamp
Required capabilities: Recognition, spatial awareness

(c)
2308.02490#61
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02490
62
Q: On the right desk, what is to the left of the laptop?
GT: table lamp <OR> desk lamp
Required capabilities: Recognition, spatial awareness

(c) Q: What are all the scene text in the image?
GT: 5:30PM <AND> 88% <AND> Mario Kart 8 Deluxe <AND> MARIO KART 8 DELUXE <AND> SUPER MARIO ODYSSEY <AND> THE LEGEND OF ZELDA <AND> BREATH OF WILD <AND> Options <AND> Start
Required capabilities: OCR

(d) Q: How many gallons of supreme gasoline can I get with $50?
GT: 13.6 <OR> 13.7
Required capabilities: OCR, math

Table 12: Two samples requiring different capability integrations.

(a)
2308.02490#62
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02490
63
In which country was this photo taken?
GT: Australia
Required capabilities: Recognition, knowledge

(b) [Meme text: "Me: I'll do it at 8 / Time: 8.05 / Me: looks like I gotta wait till 9 now"]
Q: Can you explain this meme?
GT: This meme is a humorous take on procrastination and the tendency to delay tasks until a specific time. The person in the meme plans to do something at 8 o’clock, but when they miss that deadline by a few minutes, they decide to wait until 9 o’clock instead. The image of Kermit the Frog lying in bed represents the person’s laziness and lack of motivation to complete the task.
Required capabilities: Recognition, OCR, knowledge, language generation

Table 13: Two samples requiring different capability integrations.

(a) [Chart: Long-Term International Migration, UK, 1999-2008, showing immigration, emigration and net migration in thousands]
Q: The graph below shows the long-term international migration, UK, 1999-2008. Summarize the information by selecting and reporting the main features, and make comparisons where relevant. You should write at least 150 words.
GT: The chart gives information about UK immigration, emigration and net migration between 1999 and 2008.
2308.02490#63
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02490
64
GT: The chart gives information about UK immigration, emigration and net migration between 1999 and 2008. Both immigration and emigration rates rose over the period shown, but the figures for immigration were significantly higher. Net migration peaked in 2004 and 2007. In 1999, over 450,000 people came to live in the UK, while the number of people who emigrated stood at just under 300,000. The figure for net migration was around 160,000, and it remained at a similar level until 2003. From 1999 to 2004, the immigration rate rose by nearly 150,000 people, but there was a much smaller rise in emigration. Net migration peaked at almost 250,000 people in 2004. After 2004, the rate of immigration remained high, but the number of people emigrating fluctuated. Emigration fell suddenly in 2007, before peaking at about 420,000 people in 2008. As a result, the net migration figure rose to around 240,000 in 2007, but fell back to around 160,000 in 2008.
Required capabilities: Recognition, OCR, language generation, spatial awareness

(b)
2308.02490#64
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02490
65
Q: Which car is on the parking spot 33?
GT: no <OR> empty
Required capabilities: OCR, spatial awareness

Table 14: Three samples requiring different capability integrations.

(a) Q: Is this apple organic?
GT: yes
Required capabilities: Recognition, OCR

(b) [Food web diagram]
Q: Which are producers in this food web?
GT: Phytoplankton <AND> Seaweed
Required capabilities: OCR, knowledge, spatial awareness

(c)
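The ground truths above use `<OR>` and `<AND>` separators: any one of the `<OR>` alternatives is sufficient, while every `<AND>` part is required. A minimal keyword-matching check in the spirit of the keyword-matching baseline could read the separators as sketched below; this is an illustrative reading, not the paper's implementation.

```python
def keyword_match(prediction, ground_truth):
    """Illustrative keyword matching for MM-Vet-style ground truths.

    '<AND>'-joined parts must all appear in the prediction;
    within each part, any '<OR>' alternative is accepted.
    Returns 1.0 if the prediction matches, else 0.0.
    """
    pred = prediction.lower()
    for part in ground_truth.split("<AND>"):
        alternatives = [alt.strip().lower() for alt in part.split("<OR>")]
        if not any(alt in pred for alt in alternatives):
            return 0.0
    return 1.0

print(keyword_match("It is a desk lamp.", "table lamp <OR> desk lamp"))  # 1.0
print(keyword_match("Seaweed only.", "Phytoplankton <AND> Seaweed"))     # 0.0
```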
2308.02490#65
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02490
66
Q: Does the person bigger than the car?
GT: no
Required capabilities: Recognition, knowledge, spatial awareness

Table 15: Two samples requiring different capability integrations.

(a) [Table image: Underground Railway Systems]
Q: The table below gives information about the underground railway systems in six cities. Summarise the information by selecting and reporting the main features, and make comparisons where relevant. You should write at least 150 words.
GT: The table shows data about the underground rail networks in six major cities. The table compares the six networks in terms of their age, size and the number of people who use them each year. It is clear that the three oldest underground systems are larger and serve significantly more passengers than the newer systems. The London underground is the oldest system, having opened in 1863. It is also the largest system, with 394 kilometres of route. The second largest system, in Paris, is only about half the size of the London underground, with 199 kilometres of route. However, it serves more people per year. While only third in terms of size, the Tokyo system is easily the most used, with 1927 million passengers per year. Of the three newer networks, the Washington DC underground is the most extensive, with 126 kilometres of route, compared to only 11 kilometres and 28 kilometres for the Kyoto and Los Angeles systems. The Los Angeles network is the newest, having opened in 2001, while the Kyoto network is the smallest and serves only 45 million passengers per year.
2308.02490#66
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02490
69
(a) Q: How many gallons of supreme gasoline can I get with $50?
GT: 13.6 <OR> 13.7
Required capabilities: OCR, math
GPT-4V: You can get approximately 13.7 gallons of supreme gasoline with $50 at the price of $3.659 per gallon.
GPT4_score: 1.0

(b) Q: What is the price difference between diesel and regular gasoline?
GT: 0.11
Required capabilities: OCR, math
GPT-4V: The price difference between diesel and regular gasoline is $0.11, with diesel being more expensive.
GPT4_score: 1.0

(c) Q: Based on the image, what is the total international awareness of Tina Fey?
Required capabilities: OCR, spatial awareness
GPT-4V: The total international awareness of Tina Fey is 35%.
GPT4_score: 1.0

(d) Q: Based on the image, in which country does Tina Fey have the highest level of total awareness?
GT: Australia
Required capabilities: OCR, spatial awareness
GPT-4V: Tina Fey has the highest level of total awareness in Australia at 62%.
GPT4_score: 1.0

(e) Q: What is
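As a sanity check on the math step in example (a), the question reduces to a single division; the snippet below assumes the supreme-gasoline price of $3.659 per gallon that GPT-4V reads from the sign.

```python
# Assumed price read from the sign: $3.659 per gallon of supreme gasoline.
budget = 50.0
price_per_gallon = 3.659
gallons = budget / price_per_gallon
print(round(gallons, 1))  # 13.7, matching GT "13.6 <OR> 13.7" after rounding
```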
2308.02490#69
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02490
70
level of total awareness?
GT: Australia
Required capabilities: OCR, spatial awareness
GPT-4V: awareness in Australia at 62%.
GPT4_score: 1.0

(e) Q: What is the price for tomatoes?
GT: eight <OR> 8.0
Required capabilities: Recognition, spatial awareness
GPT-4V: The price for tomatoes is 8.00 as shown on the label in the image.

(f) Q: What fruit is to the right of plums?
GT: orange
Required capabilities: Recognition, spatial awareness
GPT-4V: The fruit to the right of the plums are oranges.
GPT4_score: 1.0

(g) Q: Which car is on the parking spot 33?
GT: no <OR> empty
Required capabilities: OCR, recognition, spatial awareness
GPT-4V: The blue car is on parking spot 33.
GPT4_score: 0.0

(h) Q: Which spot is the red car parking on?
GT: 31
Required capabilities: OCR, recognition, spatial awareness
GPT-4V: The red car is parking on spot number 87.
GPT4_score: 0.0
2308.02490#70
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02490
71
(c) Q: Based on the image, what is the total international awareness of Tina Fey?
Required capabilities: OCR, spatial awareness
GPT-4V: The total international awareness of Tina Fey is 35%.
GPT4_score: 1.0

(d) Q: Based on the image, in which country does Tina Fey have the highest level of total awareness?
GT: Australia
Required capabilities: OCR, spatial awareness
GPT-4V: Tina Fey has the highest level of total awareness in Australia at 62%.
GPT4_score: 1.0

(e) Q: What is the price for tomatoes?
GT: eight <OR> 8.0
Required capabilities: Recognition, spatial awareness
GPT-4V: The price for tomatoes is 8.00 as shown on the label in the image.

(f) Q: What fruit is to the right of plums?
GT: orange
Required capabilities: Recognition, spatial awareness
GPT-4V: The fruit to the right of the plums are oranges.
GPT4_score: 1.0

(g) Q: Which car is on the parking spot 33?
GT: no <OR> empty
GPT-4V: The blue car is on parking spot 33.
Required capabilities:
2308.02490#71
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02490
74
[Chart titles: "Plastic Waste is Out of Control"; "Energy price cap update, ICAEW chart of the week"]

(a) Q: Which category did the worst job in plastic recycling, considering the amount of plastic made and thrown out?
GT: packaging
Required capabilities: OCR, spatial awareness

(b) Q: What is the difference in metric tonnes between the amount of plastic produced and the amount discarded?
GT: 105
Required capabilities: OCR, spatial awareness

(c) Q: Who is the person in the bottom right?
GT: Jamie Dornan
Required capabilities: Recognition, spatial awareness

(d) Q: What is the name of this landmark?
GT: baochu pagoda
Required capabilities: Recognition

(e) Q: How many birds are there?
GT: 6
Required capabilities: Recognition

(f) Q: What is the estimated average standing charge for the year 2023?
GT: 271
Required capabilities: OCR, spatial awareness, math

(g) Q: What is the estimated total cost for the year 2023?
GT: 23800
Required capabilities: OCR, spatial awareness, math

GPT-4V: The packaging category did the worst job in plastic recycling in 2015, with 141M metric tonnes of plastic thrown out of the 146M metric tonnes made.
2308.02490#74
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02490
75
The packaging category did the worst job in plastic recycling in 2015, with 141M metric tonnes of plastic thrown out of the 146M metric tonnes made.
GPT4_score: 1.0

GPT-4V: The difference in metric tonnes between the amount of plastic produced and discarded is 105 million metric tonnes (407M - 302M).
GPT4_score: 1.0

GPT-4V: The person in the bottom right is Jamie Dornan.
GPT4_score: 1.0

GPT-4V: This is the Baochu Pagoda located in Hangzhou, China.
GPT4_score: 1.0

GPT-4V: There are six birds in the image.
GPT4_score: 1.0

GPT-4V: The estimated average standing charge for the year 2023 is £280.
GPT4_score: 0.0

GPT-4V: The estimated total cost for the year 2023 is £5,890.
GPT4_score: 0.0
2308.02490#75
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02490
76
[Chart titles: "Plastic Waste is Out of Control"; "Energy price cap update, ICAEW chart of the week"]

(a) Q: Which category did the worst job in plastic recycling, considering the amount of plastic made and thrown out?
GT: packaging
Required capabilities: OCR, spatial awareness

(b) Q: What is the difference in metric tonnes between the amount of plastic produced and the amount discarded?
GT: 105
Required capabilities: OCR, spatial awareness

(c) Q: Who is the person in the bottom right?
GT: Jamie Dornan
Required capabilities: Recognition, spatial awareness

(d) Q: What is the name of this landmark?
GT: baochu pagoda
Required capabilities: Recognition

(e) Q: How many birds are there?
GT: 6
Required capabilities: Recognition

(f) Q: What is the estimated average standing charge for the year 2023?
GT: 271
Required capabilities: OCR, spatial awareness, math

(g) Q: What is the estimated total cost for

GPT-4V: The packaging category did the worst job in plastic recycling in 2015, with 141M metric tonnes of plastic thrown out of the 146M metric tonnes made.
GPT4_score: 1.0

GPT-4V: The difference in metric
2308.02490#76
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02490
77
tonnes of plastic thrown out of the 146M metric tonnes made. GPT4_score: 1.0
GPT-4V (b): The difference in metric tonnes between the amount of plastic produced and discarded is 105 million metric tonnes (407M - 302M). GPT4_score: 1.0
GPT-4V (c): The person in the bottom right is Jamie Dornan. GPT4_score: 1.0
GPT-4V (d): This is the Baochu Pagoda located in Hangzhou, China. GPT4_score: 1.0
GPT-4V (e): There are six birds in the image. GPT4_score: 1.0
GPT-4V (f): The estimated average standing charge for the year 2023 is £280. GPT4_score: 0.0
GPT-4V (g): The estimated total
2308.02490#77
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]
2308.02490
79
(a) Q: What are all the scene text in the image? GT: Connecticut law of 1642 If any man or woman be a witch -- that is, hath or consulteth with a familiar spirit -- they shall be put to death. Required capabilities: OCR
GPT-4V (a): The scene text in the image is: "Connecticut law of 1642 If any man or woman be a witch -- that is, hath or consulteth with a familiar spirit -- they shall be put to death" GPT4_score: 1.0
(b) Q: What are all the scene text in the image? GT: MR. WILLIAM SHAKESPEARES COMEDIES, HISTORIES, & TRAGEDIES. publifhed according to the True Originall Copies. Required capabilities: OCR
(c) Q: What are all the scene text in the image? GT: 5:30PM 88% Mario Kart 8 Deluxe MARIO KART 8 DELUXE SUPER MARIO ODYSSEY THE LEGEND OF ZELDA BREATH OF WILD Options Start. Required capabilities: OCR
(d) Q: In which country
2308.02490#79
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the complicated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
http://arxiv.org/pdf/2308.02490
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang
cs.AI, cs.CL, cs.CV, cs.LG
Add results of GPT-4V. Code, data and leaderboard: https://github.com/yuweihao/MM-Vet
null
cs.AI
20230804
20231024
[ { "id": "2302.13971" }, { "id": "2106.13884" }, { "id": "2306.05425" }, { "id": "2204.02311" }, { "id": "2309.17421" }, { "id": "2306.08640" }, { "id": "2305.10403" }, { "id": "2302.04166" }, { "id": "2203.15556" }, { "id": "2210.09263" }, { "id": "2306.14565" }, { "id": "2103.00020" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2204.00598" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2010.11929" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "2307.06281" }, { "id": "1910.13461" }, { "id": "2207.04672" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2305.11175" }, { "id": "2211.10435" }, { "id": "2305.07895" }, { "id": "2307.02469" }, { "id": "2308.01390" }, { "id": "2305.01937" }, { "id": "2303.16634" }, { "id": "2310.11441" }, { "id": "2306.09265" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2304.03277" }, { "id": "2303.12712" }, { "id": "2304.06939" }, { "id": "2110.07205" }, { "id": "2111.02114" }, { "id": "2304.15010" }, { "id": "2305.06500" } ]