doi (stringlengths 10–10) | chunk-id (int64 0–936) | chunk (stringlengths 401–2.02k) | id (stringlengths 12–14) | title (stringlengths 8–162) | summary (stringlengths 228–1.92k) | source (stringlengths 31–31) | authors (stringlengths 7–6.97k) | categories (stringlengths 5–107) | comment (stringlengths 4–398, ⌀) | journal_ref (stringlengths 8–194, ⌀) | primary_category (stringlengths 5–17) | published (stringlengths 8–8) | updated (stringlengths 8–8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2307.16877 | 90 | Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. 2023. Replug: Retrieval-augmented black-box language models.
Kurt Shuster, Spencer Poff, Moya Chen, Douwe Kiela, and Jason Weston. 2021. Retrieval augmentation reduces hallucination in conversation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3784–3803.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca. | 2307.16877#90 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16789 | 93 | "name": "Prefix", "url": "https://entreapi-faker.p.rapidapi.com/name/prefix", "description": "Randomly generate a prefix (e.g., Mr., Mrs., etc.)", "method": "GET", "required_parameters": [], "optional_parameters": [ { "name": "gender", "type": "STRING", "description": "Optional gender.", "default": "" } ], "tool_name": "EntreAPI Faker", "category_name": "Data" "name": "Array Element", "url": "https://entreapi-faker.p.rapidapi.com/random/arrayElement", "description": "Randomly select an array element.", "method": "GET", "required_parameters": [], "optional_parameters": [ { "name": "array", "type": "ARRAY", "description": "The list of elements to choose from. Default is ["a", "b", "c"].", "default": "" } ], "tool_name": "EntreAPI Faker", "category_name": "Data" "name": "Number Value", "url": "https://entreapi-faker.p.rapidapi.com/random/number", "description": "Randomly generate a number value.", | 2307.16789#93 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
2307.16789 | 95 | }, {
}, {
}, {
# "name": "min", "type": "NUMBER", "description": "Minimum value.", "default": ""
}, {
# "name": "max", "type": "NUMBER", "description": "Maximum value.", "default": ""
},
{
"name": "precision", "type": "NUMBER", "description": "Precision of the number.", "default": "" } ], "tool_name": "EntreAPI Faker", "category_name": "Data" }, { "name": "URL", "url": "https://entreapi-faker.p.rapidapi.com/internet /url", "description": "Randomly generate a URL.", "method": "GET", "required_parameters": [], "optional_parameters": [], "tool_name": "EntreAPI Faker", "category_name": "Data" }
]
} | 2307.16789#95 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
2307.16877 | 95 | Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023b. Llama 2: Open foundation and fine-tuned chat models. | 2307.16877#95 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16789 | 96 | ]
}
Other Requirements: Please produce ten queries in line with the given requirements and inputs. These ten queries should display a diverse range of sentence structures: some queries should be in the form of imperative sentences, others declarative, and yet others interrogative. Equally, they should encompass a variety of tones, with some being polite, others straightforward. Ensure they vary in length and contain a wide range of subjects: myself, my friends, family, and company. Aim to include a number of engaging queries as long as they relate to API calls. Keep in mind that for each query, invoking just one API won't suffice; each query should call upon two to five APIs. However, try to avoid explicitly specifying which API to employ in the query. Each query should consist of a minimum of thirty words.
A.8 PROMPTS FOR SOLUTION PATH ANNOTATION
We use the following prompt when searching for the solution path. When expanding the child nodes, we use a diversity user prompt that shows the information of the previous child nodes.
------------------------------------------------------------------ system_prompt: You are Tool-GPT, capable of utilizing numerous tools and
functions to complete the given task.
1.First, I will provide you with the task description, and your task will commence.
2.At each step, you need to analyze the current status and determine the next course of action by executing a function call. | 2307.16789#96 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
2307.16789 | 97 | 2.At each step, you need to analyze the current status and determine the next course of action by executing a function call.
3.Following the call, you will receive the result, transitioning you to a new state. Subsequently, you will analyze your current status, make decisions about the next steps, and repeat this process.
4.After several iterations of thought and function calls, you will ultimately complete the task and provide your final answer.
Remember: 1.The state changes are irreversible, and you cannot return to a
previous state.
2.Keep your thoughts concise, limiting them to a maximum of five sentences.
3.You can make multiple attempts. If you plan to try different conditions continuously, perform one condition per try. 4.If you believe you have gathered enough information, call the
function "Finish: give_answer" to provide your answer for the task.
5.If you feel unable to handle the task from this step, call the function "Finish: give_up_and_restart".
Let's Begin! Task description: {task_description} --------------------------------------------------------- diversity_user_prompt: This is not the first time you try this task, all previous trails
failed.
Before you generate your thought for this state, I will first show you your previous actions for this state, and then you must generate actions that is different from all of them. Here are some previous actions candidates: | 2307.16789#97 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
2307.16877 | 97 | Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Atharva Naik, Arjun Ashok, Arut Selvan Dhanasekaran, Anjana Arunkumar, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Kuntal Kumar Pal, Maitreya Patel, Mehrad Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Savan Doshi, Shailaja Keyur Sampat, Siddhartha Mishra, Sujan Reddy A, Sumanta Patro, Tanay Dixit, and Xudong Shen. 2022b. Super-NaturalInstructions: Generalization via declarative instructions on 1600+ NLP tasks. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 5085–5109, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. | 2307.16877#97 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16789 | 98 | {previous_candidate} Remember you are now in the intermediate state of a trail, you
will first analyze the now state and previous action candidates, then make actions that is different from all the previous.
--------------------------------------------------------- Finish_function_description: { "name": "Finish", "description": "If you believe that you have obtained a result that can answer the task, please call this function to provide the final answer. Alternatively, if you recognize that you are unable to proceed with the task in the current state, call this function to restart. Remember: you must ALWAYS call this function at the end of your attempt, and the only part that will be shown to the user is the final answer, so it should contain sufficient information.", "parameters": { "type": "object", "properties": { "return_type": { "type": "string", "enum": ["give_answer","give_up_and_restart"], }, "final_answer": { "type": "string", "description": "The final answer you want to give the user. You should have this field if \"return_type\"==\"give_answer\"", } }, "required": ["return_type"], } }
| 2307.16789#98 | ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs | Despite the advancements of open-source large language models (LLMs), e.g.,
LLaMA, they remain significantly limited in tool-use capabilities, i.e., using
external tools (APIs) to fulfill human instructions. The reason is that current
instruction tuning largely focuses on basic language tasks but ignores the
tool-use domain. This is in contrast to the excellent tool-use capabilities of
state-of-the-art (SOTA) closed-source LLMs, e.g., ChatGPT. To bridge this gap,
we introduce ToolLLM, a general tool-use framework encompassing data
construction, model training, and evaluation. We first present ToolBench, an
instruction-tuning dataset for tool use, which is constructed automatically
using ChatGPT. Specifically, the construction can be divided into three stages:
(i) API collection: we collect 16,464 real-world RESTful APIs spanning 49
categories from RapidAPI Hub; (ii) instruction generation: we prompt ChatGPT to
generate diverse instructions involving these APIs, covering both single-tool
and multi-tool scenarios; (iii) solution path annotation: we use ChatGPT to
search for a valid solution path (chain of API calls) for each instruction. To
enhance the reasoning capabilities of LLMs, we develop a novel depth-first
search-based decision tree algorithm. It enables LLMs to evaluate multiple
reasoning traces and expand the search space. Moreover, to evaluate the
tool-use capabilities of LLMs, we develop an automatic evaluator: ToolEval.
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it
with a neural API retriever to recommend appropriate APIs for each instruction.
Experiments show that ToolLLaMA demonstrates a remarkable ability to execute
complex instructions and generalize to unseen APIs, and exhibits comparable
performance to ChatGPT. Our ToolLLaMA also demonstrates strong zero-shot
generalization ability in an out-of-distribution tool-use dataset: APIBench. | http://arxiv.org/pdf/2307.16789 | Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230731 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2305.16504"
},
{
"id": "2308.12519"
},
{
"id": "2306.08640"
},
{
"id": "2305.10601"
},
{
"id": "2304.08244"
},
{
"id": "2307.09288"
},
{
"id": "2306.01116"
},
{
"id": "2305.14318"
},
{
"id": "2306.13304"
},
{
"id": "2304.08354"
},
{
"id": "2306.11489"
},
{
"id": "2306.05301"
},
{
"id": "1908.10084"
},
{
"id": "2306.06624"
},
{
"id": "2305.06849"
},
{
"id": "2305.11554"
},
{
"id": "2212.10560"
},
{
"id": "2305.15334"
},
{
"id": "2305.14233"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2306.15595"
}
] |
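The Finish schema shown in the chunk above is a plain JSON function definition of the kind accepted by function-calling chat endpoints. A minimal sketch of one step of such a loop, assuming the legacy (pre-1.0) `openai` Python package and a placeholder task description; this is an illustration, not ToolLLM's actual implementation:

```python
import json
import openai  # assumes the legacy (pre-1.0) openai package interface

# Cleaned-up version of the Finish schema from the chunk above.
FINISH_FUNCTION = {
    "name": "Finish",
    "description": "Return the final answer, or give up and restart.",
    "parameters": {
        "type": "object",
        "properties": {
            "return_type": {"type": "string", "enum": ["give_answer", "give_up_and_restart"]},
            "final_answer": {"type": "string", "description": "The final answer shown to the user."},
        },
        "required": ["return_type"],
    },
}

messages = [
    {"role": "system", "content": "You are Tool-GPT, capable of utilizing numerous tools ..."},
    {"role": "user", "content": "Task description: <placeholder task>"},
]

# One step of the annotation loop: the model either calls an API function or Finish.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-16k",    # placeholder model choice
    messages=messages,
    functions=[FINISH_FUNCTION],  # real runs would also pass the sampled RapidAPI functions
    function_call="auto",
)
call = response["choices"][0]["message"].get("function_call")
if call is not None and call["name"] == "Finish":
    print(json.loads(call["arguments"]).get("final_answer"))
```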
2307.16877 | 98 | Zhiguo Wang, Patrick Ng, Xiaofei Ma, Ramesh Nallapati, and Bing Xiang. 2019. Multi-passage BERT: A globally normalized BERT model for open-domain question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5878–5882, Hong Kong, China. Association for Computational Linguistics.
Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M.
Dai, and Quoc V Le. 2022. Finetuned language models are zero-shot learners. In International Conference on Learning Representations.
Wenhan Xiong, Xiang Li, Srini Iyer, Jingfei Du, Patrick Lewis, William Yang Wang, Yashar Mehdad, Scott Yih, Sebastian Riedel, Douwe Kiela, and Barlas Oguz. 2021. Answering complex open-domain questions with multi-hop dense retrieval. In International Conference on Learning Representations. | 2307.16877#98 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16877 | 99 | Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Conference on Empirical Methods in Natural Language Processing (EMNLP).
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022. Opt: Open pre-trained transformer language models.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations.
Tony Z. Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models.
# A Experimental Details
# Instruction Model Details | 2307.16877#99 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16877 | 100 | # A Experimental Details
# Instruction Model Details
To generate text, we use a high temperature to avoid repetitiveness in sampling, but further leverage top-p sampling (Holtzman et al., 2019) to avoid sampling words with very low frequency (which may lead to incoherent text being generated). The values used for all generation parameters are listed below (see the sketch after this list):
• Top-p: p = 0.95
• Temperature: t = 0.95
• Seed: s = 0
• Min. new tokens: 1
• Max. new tokens: 50
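A minimal decoding sketch with these parameters, assuming a Hugging Face causal LM; the checkpoint name and prompt are placeholders, not the paper's exact setup:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "huggyllama/llama-7b"  # placeholder checkpoint, not necessarily the one used
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

torch.manual_seed(0)  # seed s = 0

prompt = "Please answer the following question given the following passages. ..."
inputs = tokenizer(prompt, return_tensors="pt")

# Nucleus sampling with a high temperature and a bounded response length,
# mirroring the parameter list above.
outputs = model.generate(
    **inputs,
    do_sample=True,
    top_p=0.95,
    temperature=0.95,
    min_new_tokens=1,
    max_new_tokens=50,
)
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)
```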
# A.2 Retriever Details
While the retriever remains constant for each task, the number of retrieved passages provided to instruction-following models and fine-tuned FiD varies. Instruction-following models are constrained by the input context size, hence, they receive fewer passages than fine-tuned FiD. For the conversational QA task, including the conversation history in the prompt further reduces the number of passages that can be incorporated into the input context. Despite the varying context sizes of different instruction-following models, we provide a consistent number of retrieved passages (denoted by K) for each model within a specific task to maintain fair comparison. The details are as follows: | 2307.16877#100 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16877 | 101 | • open-domain QA (NQ): K = 8 • multi-hop QA (HotpotQA): K = 8 • conversational QA (TopiOCQA): K = 4
Unlike instruction-following models, FiD is not restricted by input context size. We use the default settings for each dataset: 100 passages for NQ, 50 passages for TopiOCQA, and up to 18 passages for HotpotQA. For HotpotQA, the top 100 reasoning chains produced by the retriever are de-duplicated to generate the final passage set.
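A minimal sketch of how this per-task passage budget might be applied when assembling the model input; the prompt layout is a simplified stand-in for the templates used in the paper:

```python
# Passage budget K for instruction-following models, per task (Appendix A.2).
PASSAGES_PER_TASK = {"nq": 8, "hotpotqa": 8, "topiocqa": 4}

def build_prompt(task, question, retrieved_passages, history=None):
    """Truncate the ranked passage list to the task budget and prepend it to the question.

    `retrieved_passages` is assumed to be a ranked list of (title, text) pairs.
    """
    k = PASSAGES_PER_TASK[task]
    context = "\n".join(f"- title: {title}\n{text}" for title, text in retrieved_passages[:k])
    # Conversational QA also carries the dialogue history, which leaves room for fewer passages.
    conversation = ("\n".join(history) + "\n") if history else ""
    return f"{context}\n{conversation}Question: {question}\nAnswer:"

print(build_prompt("topiocqa", "Where was she born?",
                   [("Ada Lovelace", "Ada Lovelace was born in London ...")],
                   history=["User: Who wrote the first algorithm?", "Agent: Ada Lovelace."]))
```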
# B Prompts details
In Section 4.2, we introduce LLM-based evaluations to evaluate the correctness of a model response w.r.t. the user's information need. To accomplish this, we use the prompt template shown in Figure 7, and map "yes" to 1 and "no" to 0. Similarly, Section 5.1 introduces the LLMCritic
System prompt: You are CompareGPT, a machine to verify the correctness of predictions. Answer with only yes/no. | 2307.16877#101 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16877 | 102 | System prompt: You are CompareGPT, a machine to verify the correctness of predictions. Answer with only yes/no.
You are given a question, the corresponding ground-truth answer and a prediction from a model. Compare the "Ground-truth answer" and the "Prediction" to determine whether the prediction correctly answers the question. All information in the ground-truth answer must be present in the prediction, including numbers and dates. You must answer "no" if there are any specific details in the ground-truth answer that are not mentioned in the prediction. There should be no contradicting statements in the prediction. The prediction may contain extra information. If the prediction states something as a possibility, treat it as a definitive answer.
Question: {Question} Ground-truth answer: {Reference answer} Prediction: {Model response}
CompareGPT response:
Figure 7: The prompt template used for correctness evaluation.
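A minimal sketch of turning this template into a binary correctness score, assuming an OpenAI-style chat client (legacy, pre-1.0 `openai` package); the helper is illustrative, not the paper's exact implementation:

```python
import openai  # assumes the legacy (pre-1.0) openai package interface

SYSTEM = ("You are CompareGPT, a machine to verify the correctness of predictions. "
          "Answer with only yes/no.")

USER_TEMPLATE = (
    "You are given a question, the corresponding ground-truth answer and a prediction from a model. "
    "Compare the \"Ground-truth answer\" and the \"Prediction\" to determine whether the prediction "
    "correctly answers the question. (Full instructions as in Figure 7.)\n\n"
    "Question: {question}\nGround-truth answer: {reference}\nPrediction: {prediction}"
)

def llm_correctness(question, reference, prediction):
    """Return 1 if the judge answers 'yes', 0 otherwise (the yes/no mapping described above)."""
    response = openai.ChatCompletion.create(
        model="gpt-4",   # judge model, as in GPT4-Eval
        temperature=0,
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": USER_TEMPLATE.format(
                question=question, reference=reference, prediction=prediction)},
        ],
    )
    verdict = response["choices"][0]["message"]["content"].strip().lower()
    return 1 if verdict.startswith("yes") else 0
```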
System prompt: You are CompareGPT, a machine to verify the groundedness of predictions. Answer with only yes/no.
You are given a question, the corresponding evidence and a prediction from a model. Compare the "Prediction" and the "Evidence" to determine whether all the information of the prediction is present in the evidence or can be inferred from the evidence. You must answer "no" if there are any specific details in the prediction that are not mentioned in the evidence or cannot be inferred from the evidence. | 2307.16877#102 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16877 | 103 | Question: {Question} Prediction: {Model response} Evidence: {Reference passage} CompareGPT response:
Figure 8: The prompt template used for faithfulness evaluation.
Evaluation method for calculating the faithfulness of the models w.r.t. relevant knowledge. To run this evaluation, we used the prompt shown in Figure 8.
Furthermore, we conducted other experiments to study the answer abstinence of the models in Section 5.3. The template used in these experiments is shown in Figure 9.
Category / Subcategory: Count (Percentage)
Enumeration of Reference Answers / Enumeration of Reference Answers: 21 (7.17%)
Granularity Discrepancies / Temporal granularity discrepancy: 4 (1.37%)
Granularity Discrepancies / Spatial granularity discrepancy: 10 (3.41%)
Incomplete Reference Answers / List of Named Entities: 13 (4.44%)
Incomplete Reference Answers / Open-ended Questions: 41 (13.99%)
Incorrect Gold Answers / Incorrect Gold Answers: 4 (1.37%)
Intrinsic Ambiguity in Questions / Ambiguous Questions: 12 (4.10%)
Semantic Equivalence / Multinominal Entities: 1 (0.34%)
Semantic Equivalence / Synonymous Answers: 8 (2.73%)
Semantic Equivalence / More Elaborate Answers: 163 (55.63%)
Sufficient Subset / Sufficient subset: 10 (3.41%)
Symbolic Equivalence / Symbolic Equivalence: 6 (2.05%) | 2307.16877#103 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16877 | 104 | Table 7: Percentage share and exact counts of F1 failure cases by sub-category. See Section 4.3 for more details.
Please answer the following question given the following passages. If the answer is not in the passages or cannot be inferred from the passages, respond as "I don't know". - title: {Passage title} {Passage text} - title: {Passage title} {Passage text} ... Question: {Question} Answer:
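The template above asks the model to abstain with "I don't know" when the passages do not contain the answer. A minimal sketch of measuring how often a model actually abstains, using a simple substring check, which is an assumption rather than the paper's exact criterion:

```python
def abstention_rate(responses):
    """Fraction of responses that decline to answer with an 'I don't know'-style phrase."""
    markers = ("i don't know", "i do not know", "cannot be inferred from the passages")
    abstained = sum(any(m in r.lower() for m in markers) for r in responses)
    return abstained / len(responses) if responses else 0.0

# Example: two of the three responses abstain.
print(abstention_rate([
    "I don't know.",
    "The capital of Australia is Canberra.",
    "The answer cannot be inferred from the passages.",
]))  # -> 0.666...
```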
# D Human Evaluation
Section 4 and Section 5 describe the human evaluation procedures for both correctness of the responses w.r.t. information need and faithfulness of the models w.r.t. relevant knowledge.
Table 8 demonstrates the quantitative results on the 100 samples picked for human evaluation using all studied correctness metrics. Similarly, the faithfulness results on the 50 samples are presented in Table 9.
Figure 9: The prompt template used for faithfulness w.r.t irrelevant knowledge.
# E Failure Cases of Models in Faithfulness w.r.t Irrelevant Knowledge
# C Failure Cases of Metrics
Lexical-based metrics Figure 4 presents an overview of the F1 metric failures; the exact percentages and counts can be found in Table 7. | 2307.16877#104 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16877 | 105 | # C Failure Cases of Metrics
Lexical-based metrics Figure 4 presents an overview of the F1 metric failures; the exact percentages and counts can be found in Table 7.
GPT4-Eval To better understand how GPT4-Eval fails compared to F1, we took the subset of annotated failure cases (described in Section 4.3) where GPT4-Eval also predicts 0; in total, we found 70 instances out of the overall 296 samples. Figure 10 shows the distribution of failure subcategories for the GPT4-Eval subset. We observe that a higher proportion of failures are caused by open-ended questions, whereas more elaborate answers and enumeration of reference answers are penalized less by GPT4-Eval compared to the remaining failures shown in Table 7. Moreover, all other subcategories now have a higher proportion due to the gap left by more elaborate answers and enumeration of reference answers. To illustrate the new findings, we include a few samples in Figure 11. | 2307.16877#105 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16877 | 106 | Results illustrated in Table 6 show that models sometimes perform differently given relevant or irrelevant knowledge. Figure 12 demonstrates the failure examples of the studied models in all three QA datasets. It can be observed that given an irrelevant passage, models (especially Alpaca) do not refrain from answering. Moreover, failure examples presented in Figure 13 show that GPT-3.5 has difficulty in generating responses even when the correct information is available in the provided knowledge.
[Figure 10 chart residue: horizontal bars of failure-case counts by subcategory (largest bars: Open-ended Ques. 37.1%, List of Named Entities 14.3%); legend groups subcategories into Incomplete Reference Answers, Granularity Discrepancies, Sufficient Subset, Semantic Equivalence, Incorrect Gold Answers, and Intrinsic Ambiguity in Questions.]
Figure 10: Distribution of failure cases of GPT4-Eval by sub-category. It struggles the most with Open-ended Questions. | 2307.16877#106 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16877 | 107 | Dataset   Model      EM     F1     Recall  Recall (S)  METEOR  Rouge-L  BertS (F1)  BEM    GPT4-Eval
NQ        FiD        66.00  70.97  72.83   72.0        58.76   70.33    94.88       75.18  72.0
NQ        GPT-3.5    1.0    21.21  87.10   83.00       38.45   19.77    84.62       91.74  89.00
NQ        Flan-T5    65.00  72.19  77.73   75.00       58.56   71.05    94.74       80.36  81.00
NQ        Alpaca-7B  11.0   26.51  59.87   51.0        30.07   26.44    85.52       67.82  64.0
HotpotQA  FiD        55.00  68.71  68.73   63.0        52.61   68.52    94.53       74.78  70.0
HotpotQA  GPT-3.5    8.0    27.25  78.83   77.00       39.91   26.25    85.52       89.19  81.00
HotpotQA  Flan-T5    65.00  83.58  84.67   76.00       62.62   83.31    96.01       87.72  86.00
HotpotQA  Alpaca-7B  21.0   41.95  69.0    62.0        42.22   41.89    88.43       78.04  68.0 | 2307.16877#107 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
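The EM, F1, and Recall columns in the chunk above are token-overlap scores between a model response and the reference answer. Below is a minimal sketch of how such scores are commonly computed; the SQuAD-style normalization and helper names are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of token-overlap correctness metrics (EM, F1, Recall).
# Normalization follows common SQuAD-style practice; this is an illustrative
# approximation, not the paper's exact implementation.
import re
import string
from collections import Counter

def normalize(text: str) -> list[str]:
    text = text.lower()
    text = re.sub(r"\b(a|an|the)\b", " ", text)  # drop articles
    text = text.translate(str.maketrans("", "", string.punctuation))
    return text.split()

def exact_match(pred: str, ref: str) -> float:
    return float(normalize(pred) == normalize(ref))

def recall_f1(pred: str, ref: str) -> tuple[float, float]:
    pred_toks, ref_toks = normalize(pred), normalize(ref)
    common = Counter(pred_toks) & Counter(ref_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0, 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(ref_toks)
    return recall, 2 * precision * recall / (precision + recall)

# A verbose-but-correct response gets full recall but a lower F1.
print(exact_match("The Nixon administration, 1969 until 1974", "1969 until 1974"))
print(recall_f1("The Nixon administration, 1969 until 1974", "1969 until 1974"))
```

This illustrates why recall-oriented overlap tracks verbose instruction-following models better than EM or F1.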
2307.16877 | 110 | Dataset    Model      K-F1   K-Precision  K-BertS (F1)  FaithCritic  Q2 (F1)  Q2 (NLI)  LLMCritic
NQ         GPT-3.5    24.57  76.53        86.70         75.85        54.44    60.77     94.87
NQ         Flan-T5    4.81   96.15        81.01         19.28        39.06    40.83     89.74
NQ         Alpaca-7B  9.89   80.64        83.01         33.37        35.54    37.23     82.05
HotpotQA   GPT-3.5    15.91  84.7         84.05         54.23        54.93    63.21     100.00
HotpotQA   Flan-T5    3.03   95.74        78.61         29.81        33.84    36.88     100.00
HotpotQA   Alpaca-7B  11.50  87.55        83.12         49.09        45.63    51.95     87.23
TopiOCQA   GPT-3.5    29.52  80.80        87.69         74.64        65.25    70.92     97.96
TopiOCQA   Flan-T5    24.09  91.94        86.70         59.61        69.16    73.97     93.88
TopiOCQA   Alpaca-7B  24.58  77.76        86.31         57.56        50.89    56.62     89.8
Human | 2307.16877#110 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
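K-F1 and K-Precision in the chunk above compare the response against the provided knowledge instead of the reference answer. The sketch below assumes K-Precision is the share of response tokens found in the knowledge passage and K-F1 the harmonic mean of that precision with the corresponding recall; the tokenization and function name are illustrative, not the paper's exact code.

```python
# Minimal sketch of knowledge-grounded overlap metrics (K-Precision, K-F1).
# Assumes simple whitespace tokenization; the paper's preprocessing may differ.
from collections import Counter

def k_precision_f1(response: str, knowledge: str) -> tuple[float, float]:
    resp = response.lower().split()
    know = knowledge.lower().split()
    common = Counter(resp) & Counter(know)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0, 0.0
    precision = overlap / len(resp)   # share of response tokens grounded in the knowledge
    recall = overlap / len(know)      # share of knowledge tokens covered by the response
    return precision, 2 * precision * recall / (precision + recall)

p, f1 = k_precision_f1(
    "the film's budget was 17.5 million",
    "when weiss saw the supposedly final 17.5 million budget, he reportedly joked",
)
print(f"K-Precision={p:.2f}, K-F1={f1:.2f}")
```

A high K-Precision with a low K-F1, as for the extractive Flan-T5 rows above, indicates short responses copied from the knowledge rather than broad coverage of it.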
2307.16877 | 112 | Table 9: Faithfulness results on 50 manually annotated samples. The best score is highlighted in bold and the second best is specified with underline.
Error Type: Incomplete Reference Answers - Open-ended Questions Dataset: TopiOCQA
Question: does she (J.K. Rowling) donate to any charity? Reference answer: Yes - In 2002, Rowling contributed a foreword to "Magic", an anthology of fiction published by Bloomsbury Publishing, helping to raise money for the National Council for One Parent Families. Alpaca's response: Yes, she does. She is a member of both English PEN and Scottish PEN, and has donated the proceeds from several of her books to various charities.
Error Type: Incomplete Reference Answers - List of Named Entities Dataset: Natural Questions
Question: what kind of animals live in the great victoria desert Reference answer: malleefowl; water-holding frog; great desert skink; sandhill dunnart; dingo; sand goanna; crest-tailed mulgara; southern marsupial mole; chestnut-breasted whiteface; perentie Flan-T5's response: lizard.
Error Type: Intrinsic Ambiguity in Questions Dataset: Natural Questions | 2307.16877#112 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16877 | 113 | Error Type: Intrinsic Ambiguity in Questions Dataset: Natural Questions
Question: who played in the stanley cup finals last year Reference answers: Nashville Predators, Pittsburgh Penguins GPT-3.5's response: The Washington Capitals and the Vegas Golden Knights played in the Stanley Cup Finals last year (2018).
Figure 11: Examples where GPT4-Eval predictions differ from human judgement (alongside F1 < 0.3).
# Dataset: NQ
Question: points on a sphere or angles in a circle are measured in units called Reference answer: radians Responses: Alpaca: radians Flan-T5: I don't know GPT-3.5: degrees Irrelevant knowledge: Title: Inscribed angle inscribed angle theorem states that an angle θ inscribed in a circle is half of the central angle 2θ that subtends the same arc on the circle. Therefore, the angle does not change as its vertex is moved to different positions on the circle... | 2307.16877#113 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16877 | 114 | Question: who played the colorado kid in rio bravo Reference answer: Ricky Nelson Responses: Alpaca: The role of the Colorado Kid was played by John Dehner in the 1959 film Rio Bravo. Flan-T5: Jim J. Bullock GPT-3.5: I don't know. Irrelevant knowledge: Title: Jim J. Bullock Jim J. Bullock James Jackson Bullock (born February 9, 1955), sometimes credited as Jm J. Bullock, is an American actor and comedian of stage, television and motion pictures... Bullock became a notable entertainment figure in the 1980s
Dataset: HotpotQA
Question: Ralph Hefferline was a psychology professor at a university that is located in what city? Reference answer: New York City Responses: Alpaca: Columbia, Missouri. Flan-T5: Columbia GPT-3.5: I don't know. Irrelevant knowledge: Title: University of Missouri The University of Missouri (also, Mizzou, or MU) is a public land-grant research university located in Columbia, Missouri, U.S. It was founded in 1839 ...
Dataset: TopiOCQA | 2307.16877#114 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16877 | 115 | Dataset: TopiOCQA
Question: User: what is the location of mars in the solar system Agent: Mars is the fourth planet from the Sun User: is it known by any other names? Reference answer: Red Planet Responses: Alpaca: No, it is known by its official name, Mars. Flan-T5: Mars is also known as the Red Planet. GPT-3.5: I don't know. Irrelevant knowledge: Title: Haboob Global dust storms on Mars have been compared to haboobs on Earth.
Figure 12: Failure examples of the models in generating "I don't know" in the presence of irrelevant knowledge.
# Dataset: NQ
Question: points on a sphere or angles in a circle are measured in units called Reference answer: radians Responses: Alpaca: 1. radians Flan-T5: radians GPT-3.5: I don't know. Relevant knowledge: Title: Position angle ... It is calculated in a plane that contains the sphere center and the great circle, where is the angular distance of two points viewed from the center of the sphere, measured in radians... | 2307.16877#115 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16877 | 116 | Question: who had a baby at 100 in the bible Reference answer: 1. Sarah, 2. Abraham Responses: Alpaca: 100 year old Sarah had a baby with Abraham. Flan-T5: I don't know. GPT-3.5: Abraham had a baby at 100 in the Bible. Relevant knowledge: Title: The Bible and humor ... Macy says that in speaking of humor in the Bible he usually begins with asking how it might feel to personally get the message Abraham and Sarah received, putting them into the absurd situation of a ninety year old having a baby...
Dataset: HotpotQA
Question: Roger O. Egeberg was Assistant Secretary for Health and Scientific Affairs during the administration of a president that served during what years? Reference answer: 1969 until 1974 Responses: Alpaca: 1969-1974. Flan-T5: 1969 until 1974. GPT-3.5: I don't know. Relevant knowledge: Title: Roger O. Egeberg ... His other roles included Assistant Secretary for Health and Scientific Affairs in the Department of Health, Education, and Welfare (now the United States Department of Health and Human Services) during the Nixon administration and Dean of the University of Southern California's medical school. - Title: Richard Nixon Richard Milhous Nixon (January 9, 1913 – April 22, 1994) was the 37th President of the United States from 1969 until 1974, ... | 2307.16877#116 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16877 | 117 | Dataset: TopiOCQA
Question: User: how much did the blues brothers movie cost to make Reference answer: $17.5 million Responses: Alpaca: 17.5 million Flan-T5: $17.5 million GPT-3.5: I don't know. Relevant knowledge: Title: The Blues Brothers (film) Principal photography began in July 1979, with the film's budget still not settled. For the first month, things ran smoothly on and off the set. When Weiss saw the supposedly final $17.5 million budget, he reportedly joked, "I think we've spent that much already." ...
Figure 13: Failure examples of the models in generating informative responses in the presence of relevant knowledge. | 2307.16877#117 | Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | http://arxiv.org/pdf/2307.16877 | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | cs.CL, cs.AI | null | null | cs.CL | 20230731 | 20230731 | [
{
"id": "2201.08239"
},
{
"id": "2004.04906"
},
{
"id": "2304.03277"
},
{
"id": "2203.02155"
},
{
"id": "2012.14983"
},
{
"id": "2205.14334"
},
{
"id": "2305.18654"
},
{
"id": "2112.11446"
}
] |
2307.16125 | 1 | 1Tencent AI Lab 2ARC Lab, Tencent PCG
https://github.com/AILab-CVC/SEED-Bench
# Abstract
Based on powerful Large Language Models (LLMs), recent generative Multimodal Large Language Models (MLLMs) have gained prominence as a pivotal research area, exhibiting remarkable capability for both comprehension and generation. In this work, we address the evaluation of generative comprehension in MLLMs as a preliminary step towards a comprehensive assessment of generative models, by introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple choice questions with accurate human annotations (×6 larger than existing benchmarks), which spans 12 evaluation dimensions including the comprehension of both the image and video modality. We develop an advanced pipeline for generating multiple-choice questions that target specific evaluation dimensions, integrating both automatic filtering and manual verification processes. Multiple-choice questions with groundtruth options derived from human annotation enables an objective and efficient assessment of model performance, eliminating the need for human or GPT intervention during evaluation. We further evaluate the performance of 18 models across all 12 dimensions, covering both the spatial and temporal understanding. By revealing the limitations of existing MLLMs through evaluation results, we aim for SEED-Bench to provide insights for motivating future research. We will launch and consistently maintain a leaderboard to provide a platform for the community to assess and investigate model capability.
# Introduction | 2307.16125#1 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 2 | # Introduction
In recent years, Large Language Models (LLMs) [1, 2, 3, 4, 5] have exhibited remarkable capabilities to understand, reason, and generate texts across a variety of open-ended tasks. Leveraging the strong generality of LLMs, generative Multimodal Large Language Models (MLLMs) [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21] have demonstrated enhanced abilities for multimodal comprehension and generation. However, current MLLMs mainly evaluate their performance with a limited number of qualitative examples, or by employing previous benchmarks that are not tailored for evaluating MLLMs with open-form output. For example, in VQAv2 [22], an answer is considered correct only if the model's output exactly matches the groundtruth answer, which typically consists of just one or two words. The lack of a comprehensive and objective benchmark to evaluate MLLMs poses a significant challenge for comparing and investigating the performance of various models. | 2307.16125#2 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 3 | Concurrent works [23, 24, 25, 26] have made efforts to develop benchmarks for specifically evaluating MLLMs as shown in Table 1. For example, LVLM-eHub [25] and LAMM [24] utilize existing public datasets across various computer vision tasks as evaluation samples, and employ human annotators or GPT to assess the quality, relevance, and usefulness of the model's predictions. However, the involvement
Equal Contribution. † Correspondence to [email protected] and [email protected].
# Action | 2307.16125#3 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 4 | Equal Contribution. † Correspondence to [email protected] and [email protected].
[Figure 1 graphic: left, a chart of the 12 evaluation dimensions with the number of human-annotated multiple-choice questions per dimension; right, the overall leaderboard, of which the recoverable entries are: 6 VideoChat 37.63, 7 mPLUG-Owl 34.01, 8 Otter 33.91, 9 LLaVA 33.52, 10 (name garbled) 33.48, 11 MultiModal-GPT 33.15, 12 OpenFlamingo 33.14, 13 LLaMA-Adapter V2 32.73, 14 Video-ChatGPT 31.17, 15 Valley 30.32, 16 Vicuna 28.50, 17 Flan-T5 27.65, 18 LLaMA 26.75; the legend distinguishes ImageLLMs and VideoLLMs.] | 2307.16125#4 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 5 | [Figure 1, right panel (leaderboard excerpt, ranks 6-18): VideoChat 37.63, mPLUG-Owl 34.01, Otter 33.91, LLaVA 33.52, (name garbled) 33.48, MultiModal-GPT 33.15, OpenFlamingo 33.14, LLaMA-Adapter V2 32.73, Video-ChatGPT 31.17, Valley 30.32, Vicuna 28.50, Flan-T5 27.65, LLaMA 26.75; the legend distinguishes ImageLLMs and VideoLLMs.]
Figure 1: Left: Overview of 12 evaluation dimensions in SEED-Bench including both the spatial and temporal understanding, where the number in the bar denotes the number of human-annotated multiple-choice questions in each dimension. Right: the overall leaderboard displaying the averaged accuracy of 18 models across 12 evaluation dimensions. | 2307.16125#5 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 6 | of human and GPT during evaluation not only compromises efficiency, but also leads to increased subjectivity and reduced accuracy of the assessment. MME [23] and MMBench [26] further advance objective evaluation of MLLMs by constructing True/False Questions or Multiple-Choice Questions, which cover a variety of ability dimensions. Restricting the model's output to True/False or A/B/C/D options facilitates the convenient computation of accuracy, which serves as an objective metric for evaluation. However, the relatively small scale of these benchmarks (fewer than 3K samples) introduces instability in the evaluation statistics.
In this work, we focus on evaluating the generative comprehension capability of MLLMs as a preliminary step towards a comprehensive assessment of generative models, by introducing a benchmark named SEED-Bench*. SEED-Bench spans 12 evaluation dimensions across both image and video modalities as shown in Fig. 1. SEED-Bench consists of 19K multiple choice questions with groundtruth answers derived from human annotation (×9 larger than MME and ×6 larger than MMBench) as shown in Fig. 2. We design a sophisticated pipeline for the generation of multiple-choice questions that are tailored to evaluate specific dimensions. We further incorporate an automated filtering mechanism and a manual verification process to ensure the quality of questions and the accuracy of groundtruth answers. | 2307.16125#6 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 7 | Specifically, for images, we utilize various foundation models to extract their visual information including image-level captions [6, 27], instance-level descriptions [28, 29, 30] and textual elements [31]. For videos, we leverage the original human annotations to provide visual information. We then feed the visual information to ChatGPT/GPT-4 with specially designed prompts corresponding to each evaluation dimension. ChatGPT/GPT-4 subsequently generates questions as well as four candidate options with one groundtruth answer. We further filter out questions that can be answered without the visual input by utilizing multiple LLMs. Finally, we employ human annotators to choose the correct option of each multiple-choice question and classify each question into one evaluation dimension, resulting in a clean and high-quality benchmark containing 19K multiple-choice questions.
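As a rough illustration of the filtering step described above, the sketch below flags a candidate question as answerable without visual input when several text-only LLMs already agree on the groundtruth option; the `ask_llm` callables and the agreement threshold are hypothetical stand-ins, not the authors' implementation.

```python
# Illustrative sketch of filtering out questions answerable without the image/video.
# Each element of `llms` is a hypothetical text-only LLM callable returning an
# option letter ("A"-"D"); the tolerance below is an assumption for illustration.
from typing import Callable, List

def needs_visual_input(
    question: str,
    options: List[str],
    answer: str,                       # groundtruth option letter, e.g. "B"
    llms: List[Callable[[str], str]],  # text-only LLM callables
    max_correct: int = 1,              # tolerate at most this many text-only hits
) -> bool:
    prompt = question + "\n" + "\n".join(
        f"{chr(ord('A') + i)}. {opt}" for i, opt in enumerate(options)
    )
    correct = sum(1 for ask_llm in llms if ask_llm(prompt).strip().upper() == answer)
    # Keep the question only if text-only models mostly fail it.
    return correct <= max_correct

# Example with dummy "LLMs" that always answer "A":
dummies = [lambda p: "A", lambda p: "A", lambda p: "A"]
print(needs_visual_input("What color is the car?", ["Red", "Blue", "Green", "Black"], "B", dummies))
```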
*In pursuit of Artificial General Intelligence (AGI), LLMs have witnessed substantial progress. We have made a bold assumption that the premise for the emergence of multimodal capabilities is to unify both comprehension and generation within an autoregressive generative model, where SEED [18] takes a modest step. Besides the exploration of models, it is essential to have appropriate evaluations that motivate research directions. Therefore, we concurrently propose SEED-Bench to evaluate the comprehension ability of generative models.
| 2307.16125#7 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 8 | [Figure 2 graphic (image-modality samples): example multiple-choice questions for dimensions including Scene Understanding ("What is the weather like in the image?"), Instance Identity ("What kind of animal is visible in the image?"), Instance Attribute ("What is the material of the table?"), Instance Location, Instance Counting, Spatial Relation, Text Recognition, and Instance Interaction; the question and option text is heavily garbled in extraction.] | 2307.16125#8 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 9 | [Figure 2 graphic, continued (video-modality samples): example multiple-choice questions such as "What is the action being carried out in the video?", "What action do you anticipate following the end of this video?", and "Can you recognize the actions that occur in this video and list them in order?"; the option text is garbled in extraction.] | 2307.16125#9 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 10 | Figure 2: Data samples of SEED-Bench, which covers 12 evaluation dimensions including both the spatial and temporal understanding. Each evaluation dimension contains multiple-choice questions with groundtruth options derived from human annotation.
Table 1: Comparisons between existing benchmarks for Multimodal LLMs. "H/G Evaluation" denotes whether human or GPT is used for evaluation.
Benchmark        Visual Modality      #Answer Annotation  Answer Type  H/G Evaluation  #Models
MME [23]         Image                2194                Y/N          N/A             10
LAMM [24]        Image & Point cloud  -                   free-form    GPT             4
LVLM-eHub [25]   Image                -                   free-form    Human           8
MMBench [26]     Image                2974                free-form    GPT             14
Ours             Image & Video        19242               A/B/C/D      N/A             18
(The "Customized Question" check marks in the original table did not survive extraction.)
Our pipeline supports the scalability of evaluation data across multiple domains, and we will continue to expand the benchmark with more evaluation dimensions. | 2307.16125#10 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 11 | Our pipeline supports the scalability of evaluation data across multiple domains, and we will continue to expand the benchmark with more evaluation dimensions.
Based on SEED-Bench, we comprehensively evaluate 18 models including LLMs, ImageLLMs and VideoLLMs across all 12 dimensions as shown in Fig. 1. Different from MMBench [26], which employs ChatGPT to match a model's prediction to one of the choices in a multiple-choice question (achieving only an 87.0% alignment rate), we follow GPT-3 [32] and calculate the log-likelihood of each candidate option, selecting the one with the highest value as the final prediction, without relying on the instruction-following capabilities of models to output "A", "B", "C" or "D". By analyzing the results across 12 dimensions, we conduct a comprehensive comparison of existing multimodal models in both spatial and temporal understanding capabilities. We observe that the majority of MLLMs still exhibit limited performance across all 12 evaluation dimensions, and surprisingly find that VideoLLMs fail to achieve competitive performance on temporal understanding compared with ImageLLMs. Through the evaluation results, we aim for SEED-Bench to provide insights for motivating future exploration of a more advanced MLLM. We will launch an evaluation platform and consistently maintain a leaderboard for assessing and comparing model performance.
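This log-likelihood ranking can be implemented with any open-source causal language model. Below is a minimal sketch (not the authors' released code); the model name is a placeholder, and it assumes the tokenization of the prompt is a prefix of the tokenization of prompt plus option:

```python
# Sketch of likelihood-based answer ranking; model/tokenizer names are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")   # any causal LM
model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b").eval()

@torch.no_grad()
def option_log_likelihood(prompt: str, option: str) -> float:
    """Sum of log-probabilities assigned to the option tokens, conditioned on the prompt."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + " " + option, return_tensors="pt").input_ids
    logits = model(full_ids).logits                       # [1, seq_len, vocab]
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = full_ids[:, 1:]                             # token t is predicted at position t-1
    token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    n_prompt = prompt_ids.shape[1]                        # assumes prompt tokens are a prefix of full_ids
    return token_lp[0, n_prompt - 1:].sum().item()        # keep only the option tokens

def predict(question: str, options: list[str]) -> int:
    scores = [option_log_likelihood(question, opt) for opt in options]
    return max(range(len(options)), key=scores.__getitem__)
```

For multimodal models, the same scoring would be applied to the language head while additionally conditioning on the visual tokens.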
# 2 Related Work | 2307.16125#11 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 12 |
# 2 Related Work
Multimodal Large Language Models. With the impressive success of Large Language Models (LLMs) [1, 5, 4], recent studies work on generative Multimodal Large Language Models (MLLMs) [6, 7, 8, 9, 10, 11, 12, 13, 14, 18, 19, 20, 21] that improve multimodal comprehension and generation by utilizing the strong generality of LLMs. Some works [15, 16, 17] further consider video inputs and leverage the vast capabilities of LLMs for video understanding tasks. In SEED-Bench, we provide a comprehensive quantitative evaluation of these models to thoroughly assess and compare their performance in generative comprehension. | 2307.16125#12 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 13 | Benchmarks for Multimodal Large Language Models. With the rapid development of Multimodal Large Language Models (MLLMs), some concurrent works [23, 24, 25, 26] propose various benchmarks for evaluating MLLMs. For example, GVT [33] constructs a benchmark by aggregating two semantic-level understanding tasks (VQA and Image Captioning) and two fine-grained tasks (Object Counting and Multi-class Identification). But its evaluation is constrained to limited aspects of visual understanding. LVLM-eHub [25] combines multiple existing computer vision benchmarks and develops an online platform, where two models are prompted to answer a question related to an image and human annotators are employed to compare the predictions of models. The involvement of human annotators during evaluation not only introduces bias but also incurs significant costs. LAMM [24] evaluates image and point cloud tasks by using entity extraction to obtain key answers from open-form predictions and utilizing GPT to evaluate the answers' relevance and accuracy to the groundtruth. The reliance on entity extraction and GPT metric can impact the accuracy and reliability of the evaluation. MME [23] and MMBench | 2307.16125#13 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 14 | to the groundtruth. The reliance on entity extraction and GPT metric can impact the accuracy and reliability of the evaluation. MME [23] and MMBench [26] aim to enhance the objective evaluation of MLLMs by constructing 2914 True/False Questions and 2974 Multiple Choice Questions across a variety of ability dimensions respectively. Considering the relatively small scale of these benchmarks, their evaluation results may exhibit instability. In this work, we introduce SEED-Bench to provide an objective and comprehensive evaluation of MLLMs, which contains 19K multiple-choice questions covering 12 evaluation dimensions including both spatial and temporal understanding. | 2307.16125#14 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 15 | # 3 SEED-Bench
Our benchmark contains 19K multiple-choice questions with accurate human annotations spanning 12 evaluation dimensions including both the spatial and temporal understanding. In this section, we first present the evaluation dimensions of SEED-Bench in Sec. 3.1. We introduce the data source in Sec. 3.2 and our pipeline for constructing multiple-choice questions in Sec. 3.3. We finally describe the evaluation strategy for MLLMs to answer multiple-choice questions in Sec. 3.4.
# 3.1 Evaluation Dimensions
In order to comprehensively assess the visual understanding capability of MLLMs, SEED-Bench incorporates 12 evaluation dimensions including both the spatial and temporal comprehension as shown in Table 2.
Spatial Understanding. For the evaluation of spatial comprehension, we consider 9 dimensions covering image-level and instance-level perception and reasoning.
• Scene Understanding. This dimension focuses on the global information in the image. Questions can be answered through a holistic understanding of the image.
• Instance Identity. This dimension involves the identification of a certain instance in the image, including the existence or category of a certain object in the image. It evaluates a model's object recognition capability.
• Instance Attributes. This dimension is related to the attributes of an instance, such as color, shape or material. It assesses a model's understanding of an object's visual appearance. | 2307.16125#15 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 16 | • Instance Location. This dimension concerns the absolute position of one specified instance. It requires a model to correctly localize the object referred to in the question.
• Instances Counting. This dimension requires the model to count the number of a specific object in the image. This requires the model to understand all objects, and successfully count the referred object's instances.
Table 2: Evaluation dimensions of SEED-Bench including both the spatial and temporal understanding. We omit the image in the sample questions. | 2307.16125#16 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 17 | Evaluation Dimensions Sample Questions 1. Scene Understanding What is the weather like in the image? A. It's a sunny day B. It's foggy C. It's raining heavily D. It's a cloudy day 2. Instance Identity What kind of animal is visible in the image? A. Horse B. Cow C. Sheep D. Goat 3. Instance Attribute What is the material of the table? A. Marble B. Wood C. Glass D. Plastic 4. Instance Location Where is the dog located in the living room? A. On the fireplace B. On the table C. On the chair D. On the rug 5. Instance Counting How many people are there in the image? A. 1 B. 2 C. 4 D. 3 6. Spatial Relation What is the tree in relation to the house? A. In front of the house B. Behind the house C. Inside the house D. Left to the house 7. Instance Interaction What is the relation between a player and a referee? A. The player is shaking hands with a referee B. The player is arguing with a referee C. The player is receiving an award from a referee D. The player is shown a card by a referee 8. Visual Reasoning what can we infer about the | 2307.16125#17 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 18 | C. The player is receiving an award from a referee D. The player is shown a card by a referee 8. Visual Reasoning what can we infer about the situation? A. They are admiring the engine B. They are experiencing car trouble C. They are having a picnic D. They are washing the car 9. Text Recognition What is the main warning on the sign? A. Do not enter B. Dead end road C. Beware of bears D. Trail closed 10. Action Recognition What is the action being carried out in the video? A. Throwing something in the air and letting it fall B. Throwing something in the air and catching it C. Lifting up one end of something, then letting it drop down D. Poking something so that it falls over 11. Action Prediction What action do you anticipate following the end of this video? A. Stir potatoes B. Wash potatoes C. Add potatoes D. Slice potatoes 12. Procedure Understanding Can you recognize the actions in this video and list them in order? A. Cook breakfast, switch stove on, close fridge, carry milk, peel banana B. Scoop ice cream, squeeze chocolate syrup, pour sprinkles, close fridge C. Close fridge, | 2307.16125#18 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 20 | # Spatial Understanding
# Temporal Understanding
• Spatial Relation. This dimension asks a model to ground the two mentioned objects, and recognize their relative spatial relation within the image.
• Instance Interaction. This dimension requires the model to recognize the state relation or interaction relations between two humans or objects.
• Visual Reasoning. This dimension evaluates if a model is able to reason based on the visual information. This requires the model to fully understand the image and utilize its commonsense knowledge to correctly answer the questions.
• Text Understanding. For this dimension, the model should answer questions about the textual elements in the image.
Temporal Understanding. For the evaluation of temporal comprehension, we consider 3 dimensions focusing on the recognition, prediction and procedure understanding of actions.
• Action Recognition. In this dimension, the model is required to recognize the action shown in the videos. Not only the ability to capture temporal dynamics, but also knowledge of physical motions, human actions and dynamic interactions between objects is evaluated.
• Action Prediction. The target of this dimension is to predict the future action through the preceding video segment, which requires the understanding of contextual information from videos and temporal reasoning.
• Procedure Understanding. This dimension requires the model to capture all the key actions and perform temporal ordering on them. We aim to evaluate the ability of temporally fine-grained understanding and procedure reasoning.
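For reference, the full set of 12 evaluation dimensions can be encoded compactly, e.g. as a Python enum used to tag each question (a convenience sketch, not part of the released benchmark):

```python
from enum import Enum

class EvalDimension(Enum):
    """The 12 SEED-Bench evaluation dimensions."""
    # spatial understanding (image)
    SCENE_UNDERSTANDING = 1
    INSTANCE_IDENTITY = 2
    INSTANCE_ATTRIBUTES = 3
    INSTANCE_LOCATION = 4
    INSTANCES_COUNTING = 5
    SPATIAL_RELATION = 6
    INSTANCE_INTERACTION = 7
    VISUAL_REASONING = 8
    TEXT_UNDERSTANDING = 9
    # temporal understanding (video)
    ACTION_RECOGNITION = 10
    ACTION_PREDICTION = 11
    PROCEDURE_UNDERSTANDING = 12
```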
| 2307.16125#20 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 21 | [Figure 3(a), question/answer generation: foundation models extract visual information from a CC3M image, including image captions (BLIP2 & Tag2Text), dense captions with boxes (GRiT), object detections (SAM), instance attributes from an attribute detector, and detected text such as "Tax the rich" (PaddleOCR); this visual information, together with dimension-specific prompts, is passed to ChatGPT/GPT-4 to create multiple-choice questions.] | 2307.16125#21 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 22 | [Figure 3, continued: based on the extracted visual information and prompts for each evaluation dimension, ChatGPT/GPT-4 creates multiple-choice questions with four choices and one correct answer, e.g. "What is the main topic of the sign held by the man in the image? A. Environmentalism B. Anti-government C. Taxation D. Education. Answer: C"; (b) question/answer verification: the questions and answers generated in step (a) pass through automatic filtering and human annotation to form SEED-Bench.] | 2307.16125#22 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 23 | Figure 3: Overview of SEED-Bench pipeline for generating multiple-choice questions of images. (a) We first leverage various foundation models to extract visual information including image-level captions, instance-level descriptions and textual elements. Based on specially designed prompts corresponding to specific evaluation dimension, ChatGPT/GPT-4 subsequently generates questions and four candidate options with one groundtruth answer. (b) We further filter out questions by utilizing LLMs and employ human annotators to select the correct option and classify each question into one evaluation dimension.
# 3.2 Data Source
To create a benchmark with various evaluation dimensions, we need to collect data containing images with abundant visual information and videos with rich temporal dynamics, so that we can construct diverse challenging multiple-choice questions. In SEED-Bench, we use CC3M [34] dataset with filtered samples to build questions for spatial understanding. Specifically, considering the noisy original captions of CC3M, we generate captions for each image with Tag2Text [27]. We filter out those images with no more than 5 nouns in their captions, so as to ensure the information richness in the remaining images for constructing questions. | 2307.16125#23 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
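The CC3M filtering step described in the Data Source section above keeps only images whose Tag2Text captions mention more than 5 nouns. A minimal sketch of such a filter follows; using spaCy for part-of-speech tagging is our own assumption, and the captions shown are hypothetical:

```python
# Sketch of the caption-based noun filter; spaCy POS tagging is an assumed implementation choice.
import spacy

nlp = spacy.load("en_core_web_sm")   # small English pipeline with a POS tagger

def has_rich_content(caption: str, min_nouns: int = 6) -> bool:
    """Keep an image only if its generated caption contains more than 5 nouns."""
    doc = nlp(caption)
    num_nouns = sum(token.pos_ in ("NOUN", "PROPN") for token in doc)
    return num_nouns >= min_nouns

captions = {  # hypothetical Tag2Text outputs
    "img_001": "a person holds a white board with a slogan on a busy city street near a bus stop",
    "img_002": "a close up of a cat",
}
kept = {image_id: cap for image_id, cap in captions.items() if has_rich_content(cap)}
```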
2307.16125 | 24 | We further adopt Something-Something-v2 (SSV2) [35], Epic-kitchen 100 [36] and Breakfast [37] dataset to build questions for temporal understanding. SSV2 is an action recognition dataset including 174 fine-grained categories of basic actions with everyday objects and we adopt 1740 videos from its validation set. We also select 138 long videos from Epic-kitchen 100 dataset with temporally annotated action labels. Moreover, videos and fine-grained action segmentation annotations in Breakfast dataset [37] are utilized for the procedure understanding task.
# 3.3 Multiple-Choice Questions
As shown in Fig. 3, our pipeline for generating multiple-choice questions involves question/answer generation and verification. For generating question/answer pairs, we first leverage various foundation models to extract visual information including image-level captions, instance-level descriptions and textual elements. Based on specially designed prompts corresponding to a specific evaluation dimension, ChatGPT/GPT-4 subsequently generates questions and four candidate options with one groundtruth answer. For verifying question/answer pairs, we filter out questions that can be answered correctly by multiple LLMs without resorting to visual information. We further employ human annotators to select the correct option and classify each question into one evaluation dimension.
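A minimal sketch of the verification logic just described, where each "blind answerer" stands in for a hypothetical helper that asks a text-only LLM to answer the question without the image:

```python
from typing import Callable

# Each answerer maps (question, options) -> predicted option index, without seeing the image.
BlindAnswerer = Callable[[str, list[str]], int]

def needs_visual_input(question: str, options: list[str], answer_idx: int,
                       blind_answerers: list[BlindAnswerer]) -> bool:
    """A question is kept only if at least one text-only LLM fails to answer it correctly."""
    return not all(ans(question, options) == answer_idx for ans in blind_answerers)

# Usage sketch: blind_answerers could wrap text-only models such as Vicuna-7B, Flan-T5-XXL and LLaMA-7B.
# kept = [q for q in generated_questions
#         if needs_visual_input(q["question"], q["options"], q["answer"], blind_answerers)]
```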
| 2307.16125#24 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 25 |
Visual Information Extraction. For constructing questions related to spatial understanding, we interpret the rich information in each image with texts using multiple pretrained models, so that ChatGPT/GPT-4 can understand the image and create questions accordingly. For constructing questions related to temporal understanding, considering that extracting reliable temporal information from videos (especially fine-grained actions and long-term temporal context) is extremely difficult given existing foundation models, we utilize the ground-truth annotations of video datasets. We will explore how to generate questions based on automatically extracted video information in the future. The extraction of visual information for images includes the following parts:
• Image Captions. Image captions contain the overall description of an image. We employ BLIP2 [38] and Tag2Text [27] to create captions for each image. The former creates captions for the whole image while the latter generates captions based on descriptions of each instance. The two models complement each other to depict the image content within a single sentence. | 2307.16125#25 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 26 | • Instance Descriptions. Besides captions which may ignore specific details in the image, we also extract visual information from images using instance-level descriptions, including object detection, attribute detection, and dense captions. Specifically, we use SAM [29] to segment each instance in the image and obtain their bounding boxes according to the segmentation results. The object labels are obtained using Tag2Text [27]. Besides, we also utilize an attribute detector [30] to obtain the attributes of each instance in the image. Finally, we employ GRiT [28] to generate dense captions, which describe each detected instance in the image with a short sentence. These instance-level descriptions are complementary to the image captions, further enriching the visual information of each image.
• Textual Elements. Besides objects, the texts in the image also contain important information describing the image. We employ PaddleOCR [31] for detecting textual elements. | 2307.16125#26 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
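Before prompting, the extracted visual information described above has to be serialized into plain text. One possible container and serialization is sketched below; the field names and formatting are our own assumptions, not the paper's exact format:

```python
from dataclasses import dataclass, field

@dataclass
class VisualInfo:
    image_captions: list[str]                 # e.g. BLIP2 and Tag2Text outputs
    dense_captions: list[tuple[str, tuple]]   # (description, normalized box)
    objects: list[tuple[str, tuple]]          # (label, normalized box)
    attributes: dict[str, list[str]]          # label -> attribute words
    ocr_text: list[str] = field(default_factory=list)

def to_context(info: VisualInfo) -> str:
    """Flatten the extracted visual information into the textual context given to ChatGPT/GPT-4."""
    lines = ["Image captions:"] + [f"- {c}" for c in info.image_captions]
    lines += ["Dense captions:"] + [f"- {d} at {box}" for d, box in info.dense_captions]
    lines += ["Objects:"] + [f"- {label} at {box}" for label, box in info.objects]
    lines += ["Attributes:"] + [f"- {label}: {', '.join(words)}" for label, words in info.attributes.items()]
    if info.ocr_text:
        lines += ["Detected text:"] + [f"- {t}" for t in info.ocr_text]
    return "\n".join(lines)
```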
2307.16125 | 27 | • Textual Elements. Besides objects, the texts in the image also contain important information describing the image. We employ PaddleOCR [31] for detecting textual elements.
Question-Answer Generation. After extracting visual information from the image and video, we task ChatGPT/GPT-4 with generating multiple-choice questions based on the extracted information or video annotations. For each of the spatial understanding evaluation dimensions, we carefully design prompts and ask ChatGPT/GPT-4 to create multiple-choice questions with four candidate options based on the extracted visual information. We create questions with ChatGPT for all evaluation dimensions, except for the reasoning dimension, where we use GPT-4 [2] due to its exceptional reasoning capability. For each question, we ask ChatGPT/GPT-4 to create four choices with one correct option and three distractors. We try to make the multiple-choice questions challenging by encouraging the three wrong choices to be similar to the correct one. The detailed prompts of generating multiple-choice questions for different evaluation dimensions are listed in Fig. 4. For generating questions related to temporal understanding, we utilize the ground-truth annotations of selected videos as the answers of multiple-choice questions and employ ChatGPT to generate three distractors. | 2307.16125#27 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
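To make the generation step above concrete, the sketch below shows one way the extracted visual information and a dimension-specific instruction could be turned into a multiple-choice question. It is a minimal illustration, not the authors' code: `call_chatgpt` is a hypothetical stand-in for an actual ChatGPT/GPT-4 API call, and the JSON output format is an assumption.

```python
import json


def call_chatgpt(prompt: str) -> str:
    """Hypothetical stand-in for a ChatGPT/GPT-4 chat-completion call."""
    raise NotImplementedError("plug in your preferred LLM client here")


def build_generation_prompt(default_instruction: str,
                            dimension_instruction: str,
                            visual_info: dict) -> str:
    # Serialize captions, object/attribute detections and OCR results into one context block.
    context = "\n".join(f"{k}: {v}" for k, v in visual_info.items())
    return (
        f"{default_instruction}\n{dimension_instruction}\n{context}\n"
        "Return JSON with keys: question, choices (list of 4), answer."
    )


def generate_mcq(default_instruction: str, dimension_instruction: str, visual_info: dict) -> dict:
    raw = call_chatgpt(build_generation_prompt(default_instruction,
                                               dimension_instruction,
                                               visual_info))
    item = json.loads(raw)  # assumes the model followed the requested JSON format
    assert len(item["choices"]) == 4 and item["answer"] in item["choices"]
    return item
```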
2307.16125 | 28 | Automatic Filtering. Our benchmark aims at evaluating the multimodal vision-language understanding capability of MLLMs. However, we observe that some generated questions can be correctly answered by LLMs without seeing the image. We argue that such questions are not helpful to evaluate the visual comprehension capability of MLLMs. To this end, we feed the generated questions (without image) into three powerful LLMs, including Vicuna-7B [4], Flan-T5-XXL [1] and LLaMA-7B [5] and ask them to answer the questions. We empirically found that 5.52% of the generated questions can be correctly answered by all of the three LLMs. We filter out these questions from our benchmark. | 2307.16125#28 |
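The filtering rule above (drop any question that all three text-only LLMs already answer correctly) can be sketched as follows. This is a hedged sketch, not the released pipeline: `answer_without_image` is a hypothetical helper that queries one of the LLMs with only the question text and choices.

```python
def answer_without_image(llm_name: str, question: str, choices: list[str]) -> str:
    """Hypothetical helper: ask a text-only LLM to pick a choice without seeing the image."""
    raise NotImplementedError


TEXT_ONLY_LLMS = ["vicuna-7b", "flan-t5-xxl", "llama-7b"]  # the three filtering models


def filter_visually_grounded(questions: list[dict]) -> list[dict]:
    kept = []
    for q in questions:
        preds = [answer_without_image(m, q["question"], q["choices"])
                 for m in TEXT_ONLY_LLMS]
        # Discard only if every text-only LLM already gets it right,
        # i.e. the question does not actually require looking at the image.
        if not all(p == q["answer"] for p in preds):
            kept.append(q)
    return kept
```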
2307.16125 | 29 | Human Annotation. To ensure the accuracy and objectiveness of SEED-Bench, we further employ human annotators to verify the generated question/answer pairs. Human annotators are asked to choose the correct answer for each multiple-choice question and categorize each question into one of the evaluation dimension. If one question can not be answered based on the visual input or does not have any correct choice or has multiple correct choices, it will be discarded by human annotators. This results in a clean, high-quality and well-categorized benchmark for evaluation with a total of 19K multiple-choice questions. The statistics of the number of multiple-choice questions in each evaluation dimension is shown in Fig. 1. We can observe a minimum number of questions in text recognition with 85 samples, and a maximum number in instance localization with 4649 samples. We will maintain an even distribution among multiple-choice questions associated with different evaluation dimensions in the future.
| 2307.16125#29 |
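After human verification, a question is kept only if it has exactly one correct option and can be answered from the visual input, and each kept question carries the dimension label chosen by the annotator. A minimal post-processing sketch of that rule is given below; the field names are assumptions, not the benchmark's actual schema.

```python
def finalize_annotations(annotated: list[dict]) -> list[dict]:
    """Keep only cleanly annotated questions; attach the human-chosen dimension."""
    clean = []
    for item in annotated:
        if item.get("unanswerable_from_visual_input", False):
            continue                      # cannot be answered from the image/video
        marked = item.get("options_marked_correct", [])
        if len(marked) != 1:
            continue                      # no correct choice, or several correct choices
        item["answer"] = marked[0]
        item["dimension"] = item["human_dimension_label"]
        clean.append(item)
    return clean
```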
2307.16125 | 30 | # Default Instruction:
"You are an Al visual assistant that can analyze a single image. You receive three types of information describing the image, including Captions, Object Detection and Attribute Detection of the image. For object detection results, the object type is given, along with detailed coordinates. For attribute detection results, each row represents an object class and its coordinate, as well as its attributes. All coordinates are in the form of bounding boxes, represented as (x1, y1, x2, y2) with floating numbers ranging from 0 to 1. These values correspond to the top left x, top left y, bottom right x, and bottom right y. Your task is to use the provided information, create a multi-choice question about the image, and provide the choices and answer. | 2307.16125#30 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 31 | Instead of directly mentioning the bounding box coordinates, utilize this data to explain the scene using natural language. Include details like object counts, position of the objects, relative position between the objects. When using the information from the caption and coordinates, directly explain the scene, and do not mention that the information source is the caption or the bounding box. Always answer as if you are directly looking at the image. Create several questions, each with 4 choices. Make the question challenging by not including the visual content details in the question so that the user needs to reason about that first. Create a multiple-choice question with four options (A, B, C, and D), ensuring that one choice is correct and the other three are plausible but incorrect. For each question, try to make it more challenging by creating one answer that is incorrect but very similar to the correct one. Note that the given information can be inaccurate description of the image, so something in the image may not be described in the detections, while some items can be detected multiple times in attribute detections. Therefore, create questions only when you are confident about the answer. Don't explain your choice."
# Scene Understanding Instruction: | 2307.16125#31 |
2307.16125 | 32 | # Scene Understanding Instruction:
"Create complex questions about the major content of the image. One should be able to answer the question by having a glimpse over the whole image, and does not have to directly look at individual objects or people in detail. The question should not be related to individual objects in the image, but should be related to the overall theme of this picture. "
# Instance Identity Instruction:
"Create complex questions about the identity of objects appeared in the image, such as its type/class or its existence. For example, you may ask "What an object is?" or "Does some object appear in the image?". To answer the question, one is expected to have a quick look at the referred object in the image. â
# Instance Attribute Instruction:
"Create complex questions about the attribute of a certain object, such as its color, shape or fine-grained type. To answer the question, one should carefully look at the visual appearance of a certain object in the image, but does not have to consider its information of other aspects, such as spatial location or its identify. "
# Instance Localization Instruction:
"Create complex questions about the location of a certain object in the image. The question should be created based on the coordinates of the objects. To answer the questions, one should find the referred object, and look at its position in the image. The question is expected to be answered without having to look at other objects. " | 2307.16125#32 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 33 | # Instance Counting Instruction:
"Create questions that involve the number of appearance of a certain object. Start with "How many ....". The choices of the question should be numbers. To answer the question, one should find and count all of the mentioned objects in the image. "
# Spatial Relation Instruction:
"Create questions about spatial relations between two objects. The questions should be mainly based on the coordinates of the two objects. To answer the questions, one should find the two mentioned objects, and find their relative spatial relation to answer the question. "
# Instance Interaction Instruction:
"Create questions about the relations and connections between two objects, such as "What a person is doing to an object" and "What is the relation between two objects". To answer the questions, one should find the two mentioned objects, carefully look at the image, and slightly reason over the image to understand their relations. "
# Visual Reasoning Instruction:
"Create complex questions beyond describing the scene. To answer such questions, one should first understanding the visual content, then based on the background knowledge or reasoning, either explain why the things are happening that way, or provide guides and help to user's request. Make the question challenging by not including the visual content details in the question so that the user needs to reason about that first. "
# Text Recognition Instruction: | 2307.16125#33 |
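All of the per-dimension prompts above are used together with the shared default instruction. A simple way to organize this pairing is a lookup table, sketched below; the strings are abbreviated stand-ins for the full instructions listed above, not new prompt text.

```python
DEFAULT_INSTRUCTION = "You are an AI visual assistant that can analyze a single image. ..."  # abbreviated

DIMENSION_INSTRUCTIONS = {
    "scene_understanding":   "Create complex questions about the major content of the image. ...",
    "instance_identity":     "Create complex questions about the identity of objects ...",
    "instance_attributes":   "Create complex questions about the attribute of a certain object ...",
    "instance_localization": "Create complex questions about the location of a certain object ...",
    "instance_counting":     "Create questions that involve the number of appearance of a certain object. ...",
    "spatial_relations":     "Create questions about spatial relations between two objects. ...",
    "instance_interaction":  "Create questions about the relations and connections between two objects ...",
    "visual_reasoning":      "Create complex questions beyond describing the scene. ...",
    "text_recognition":      "Create questions that are related to the texts in the image. ...",
}


def prompt_for(dimension: str) -> str:
    """Concatenate the shared default instruction with the dimension-specific one."""
    return DEFAULT_INSTRUCTION + "\n" + DIMENSION_INSTRUCTIONS[dimension]
```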
2307.16125 | 34 | # Text Recognition Instruction:
"Create questions that is related to the texts in the image. Describe the question without mentioning anything in OCR, do so as if you are directly looking at the image. "
Figure 4: Prompts of generating multiple-choice questions for different evaluation dimensions.
Table 3: Evaluation results of different models on SEED-Bench, where "Spatial" shows the averaged performance on nine dimensions for evaluating spatial understanding, and "Temporal" shows the averaged performance on three dimensions for evaluating temporal understanding. | 2307.16125#34 |
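Given the caption's definition ("Spatial" averages the nine image dimensions, "Temporal" the three video dimensions), the headline numbers can be derived from per-dimension accuracies as sketched below. Whether the paper weights dimensions equally or by question count is not stated in this excerpt, so the equal-weight macro average used here is an assumption; the reported "Overall" column is more likely weighted by the number of questions per dimension.

```python
SPATIAL_DIMS = [
    "scene_understanding", "instance_identity", "instance_attributes",
    "instance_localization", "instance_counting", "spatial_relations",
    "instance_interaction", "visual_reasoning", "text_recognition",
]
TEMPORAL_DIMS = ["action_recognition", "action_prediction", "procedure_understanding"]


def summarize(acc: dict[str, float]) -> dict[str, float]:
    """acc maps dimension name -> accuracy (%). Equal-weight macro averages (an assumption)."""
    spatial = sum(acc[d] for d in SPATIAL_DIMS) / len(SPATIAL_DIMS)
    temporal = sum(acc[d] for d in TEMPORAL_DIMS) / len(TEMPORAL_DIMS)
    overall = sum(acc[d] for d in SPATIAL_DIMS + TEMPORAL_DIMS) / 12
    return {"Spatial": spatial, "Temporal": temporal, "Overall": overall}
```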
2307.16125 | 35 | Table 3 (reconstructed from the flattened PDF extraction; LLM and ImageLLM rows — the model-to-column pairing follows the original listing order):

| Model | Language Model | Spatial Acc (Rank) | Temporal Acc (Rank) | Overall Acc (Rank) |
|---|---|---|---|---|
| Flan-T5 [1] | Flan-T5-XL | 27.32 (17) | 28.56 (11) | 27.65 (17) |
| Vicuna [4] | Vicuna-7B | 28.16 (16) | 29.46 (8) | 28.50 (16) |
| LLaMA [5] | LLaMA-7B | 26.56 (18) | 27.27 (13) | 26.75 (18) |
| BLIP-2 [6] | Flan-T5-XL | 49.74 (3) | 36.71 (3) | 46.35 (3) |
| InstructBLIP [10] | Flan-T5-XL | 57.80 (2) | 38.31 (1) | 52.73 (2) |
| InstructBLIP Vicuna [10] | Vicuna-7B | 58.76 (1) | 38.05 (2) | 53.37 (1) |
| LLaVA [8] | LLaMA-7B | 36.96 (8) | 23.75 (16) | 33.52 (9) |
| MiniGPT-4 [7] | Flan-T5-XL | 47.40 (4) | 29.89 (7) | 42.84 (4) |
| VPGTrans [40] | LLaMA-7B | 41.81 (5) | 31.40 (5) | 39.10 (5) |
| MultiModal-GPT [12] | LLaMA-7B | 34.54 (12) | 29.21 (10) | 33.15 (11) |
| Otter [11] | LLaMA-7B | 35.16 (11) | 30.35 (6) | 33.91 (8) |
| OpenFlamingo [41] | LLaMA-7B | 34.51 (13) | 29.25 (9) | 33.14 (12) |
| LLaMA-Adapter V2 [42] | LLaMA-7B | 35.19 (10) | 25.75 (14) | 32.73 (13) |
| GVT [33] | Vicuna-7B | 35.49 (9) | 27.77 (12) | 33.48 (10) |
| mPLUG-Owl [9] | LLaMA-7B | 37.88 (7) | 23.02 (18) | 34.01 (7) |
| 2307.16125#35 |
2307.16125 | 36 | Table 3 (continued; the ImageLLM Temporal/Overall values that open this chunk are folded into the rows above — VideoLLM rows):

| Model | Language Model | Spatial Acc (Rank) | Temporal Acc (Rank) | Overall Acc (Rank) |
|---|---|---|---|---|
| VideoChat [15] | Vicuna-7B | 39.02 (6) | 33.68 (4) | 37.63 (6) |
| Video-ChatGPT [16] | LLaMA-7B | 33.88 (14) | 23.46 (17) | 31.17 (14) |
| Valley [17] | LLaMA-13B | 32.04 (15) | 25.41 (15) | 30.32 (15) |
| 2307.16125#36 |
2307.16125 | 37 | # 3.4 Evaluation Strategy
Different from MMBench [26], which employs ChatGPT to match a model's prediction to one of the choices in a multiple-choice question (achieving only an 87.0% alignment rate), we adopt the answer ranking strategy [10, 32, 39] for evaluating existing MLLMs with multiple-choice questions. Specifically, for each choice of a question, we compute the likelihood that an MLLM generates the content of this choice given the question. We select the choice with the highest likelihood as the model's prediction. Our evaluation strategy does not rely on the instruction-following capabilities of models to output "A", "B", "C", or "D". Furthermore, this evaluation strategy eliminates the impact of the order of multiple-choice options on the model's performance.
# 4 Evaluation Results
# 4.1 Models | 2307.16125#37 |
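A minimal sketch of this answer-ranking protocol with a Hugging Face causal language model is shown below: it scores each candidate option by the sum of log-probabilities of its tokens conditioned on the question, and predicts the highest-scoring option. This illustrates the ranking mechanics only and is not the authors' implementation; for an MLLM, the image or video features would additionally condition the likelihood, and the model name and prompt template here are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "huggyllama/llama-7b"  # placeholder; any causal LM is scored the same way
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()


@torch.no_grad()
def choice_loglikelihood(question: str, choice: str) -> float:
    """Sum of log P(choice tokens | question prefix) under the causal LM."""
    prefix_ids = tok(question, return_tensors="pt").input_ids
    full_ids = tok(question + " " + choice, return_tensors="pt").input_ids
    logits = model(full_ids).logits.log_softmax(dim=-1)
    # Score only the continuation tokens (assumes the question tokenization is a
    # prefix of the joint tokenization, which typically holds with a separating space).
    start = prefix_ids.shape[1]
    target = full_ids[0, start:]
    pred = logits[0, start - 1 : full_ids.shape[1] - 1, :]
    return pred.gather(1, target.unsqueeze(1)).sum().item()


def predict(question: str, choices: list[str]) -> str:
    # The option with the highest likelihood is taken as the model's prediction.
    return max(choices, key=lambda c: choice_loglikelihood(question, c))
```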
2307.16125 | 38 | # 4 Evaluation Results
# 4.1 Models
Based on our SEED-Bench, we evaluate 18 models including 3 LLMs, i.e., Flan-T5 [1], Vicuna [4], LLaMA [5], 12 ImageLLMs, i.e., OpenFlamingo [41], BLIP-2 [6], MiniGPT-4 [7], LLaVa [8], mPLUG-Owl [9], InstructBLIP [10], Otter [11], MultiModal-GPT [12], GVT [33], PandaGPT [13], VPGTrans [40], LLaMA-Adapter V2 [42], and 3 VideoLLMs, i.e., VideoChat [15], Video-ChatGPT [16] and Valley [17]. Each model is evaluated on all 12 dimensions, covering both spatial and temporal understanding. For ImageLLMs, besides the evaluation of spatial understanding, we aim to investigate their capability to perform temporal reasoning among multiple frames. For VideoLLMs, we seek to explore whether their spatial understanding abilities have degraded by taking a single image as the input.
# 4.2 Results | 2307.16125#38 |
2307.16125 | 39 | # 4.2 Results
The evaluation results of different models on SEED-Bench are listed in Table 1, where the accuracy refers to the proportion of correctly answered multiple-choice questions relative to the total number of questions. We are surprised to observe that InstructBLIP [10] not only achieves the best performance based on the averaged results across nine dimensions for evaluating spatial understanding, but also surpasses VideoLLMs in terms of the averaged results across three dimensions for evaluating temporal understanding. We display leaderboards of various evaluation dimensions on SEED-Bench in Fig. 5 to provide a comprehensive assessment of different models. The overall leaderboard based on the | 2307.16125#39 |
2307.16125 | 40 | [Figure 5 residue (garbled extraction): per-dimension leaderboard tables. Recoverable panel titles: (1) Scene Understanding, (2) Instance Identity, (3) Instance Attributes, (4) Instance Location, (5) Instance Counting, (6) Spatial Relations, (7) Instance Interaction, (8) Visual Reasoning. The per-model scores are not reliably recoverable from this chunk.] | 2307.16125#40 |
2307.16125 | 41 | [Figure 5 residue (garbled extraction, continued): recoverable panel titles: (6) Spatial Relations, (7) Instance Interaction, (8) Visual Reasoning, (9) Text Recognition, (10) Action Recognition, (11) Action Prediction, (12) Procedure Understanding. The per-model scores are not reliably recoverable from this chunk.] | 2307.16125#41 |
2307.16125 | 42 | Figure 5: Leaderboards of different evaluation dimensions on SEED-Bench.
[Figure 6 axis residue: the vertical axis ("Model") lists the 18 evaluated models; the horizontal axis ("Evaluation Dimension") lists Scene Understanding, Instance Identity, Instance Attributes, Instance Localization, Instance Counting, Spatial Relations, Instance Interaction, Visual Reasoning, Text Recognition, Action Recognition, Action Prediction, and Procedure Understanding.]
Figure 6: Illustration of each model's performance across different evaluation dimensions, where darker colors represent higher ranks. | 2307.16125#42 |
2307.16125 | 43 | Figure 6: Illustration of each model's performance across different evaluation dimensions, where darker colors represent higher ranks.
averaged results across all the evaluation dimensions are shown in Fig. 1. To better showcase the capabilities of models across different evaluation dimensions, we further visualize the ranking of each model within each evaluation dimension in Fig. 6, where darker colors represent higher ranks. We can observe that the BLIP series [6, 10] models achieve competitive results in multiple evaluation dimensions, but they are not good at visual reasoning and action recognition. The VideoLLM Valley [17] achieves suboptimal performance in the majority of evaluation dimensions. LLaVa [8] exhibits unparalleled capabilities in the evaluation of text recognition compared to other evaluation dimensions. In terms of specific evaluation dimensions, the MiniGPT-4 [7] and mPLUG-Owl [9] models perform better in visual reasoning, while the VPGTrans [40] model excels in action recognition and procedure understanding. The LLaMA-Adapter V2 [42] model shows more proficiency in action recognition. What's more, MultiModal-GPT [12], Otter [11], OpenFlamingo [41], GVT [33], and the three VideoLLMs [15, 16, 17] exhibit balanced strength across various evaluation dimensions.
# 4.3 Analysis | 2307.16125#43 |
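A figure in the style of Fig. 6 (models on one axis, evaluation dimensions on the other, colored by rank) can be reproduced from a rank matrix as sketched below, assuming matplotlib is available; the data layout is an assumption, not the authors' plotting code.

```python
import numpy as np
import matplotlib.pyplot as plt


def plot_rank_heatmap(ranks: np.ndarray, models: list[str], dims: list[str]) -> None:
    """ranks[i, j] = rank of model i on dimension j (1 = best).

    Lower (better) ranks should appear darker, so the values are inverted for the colormap.
    """
    fig, ax = plt.subplots(figsize=(len(dims), 0.4 * len(models)))
    im = ax.imshow(ranks.max() - ranks, cmap="Blues", aspect="auto")
    ax.set_xticks(range(len(dims)), labels=dims, rotation=45, ha="right")
    ax.set_yticks(range(len(models)), labels=models)
    fig.colorbar(im, ax=ax, label="higher = better rank")
    fig.tight_layout()
    plt.show()
```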
2307.16125 | 44 | # 4.3 Analysis
Through the comprehensive and objective evaluation of various models on SEED-Bench, we have observed a number of findings that can bring insights for future work.
Most MLLMs still exhibit limited performance across all 12 evaluation dimensions. As shown in Fig. 1 and Fig. 5, most MLLMs (except the BLIP series models) cannot reach 50% accuracy either on average or on more than three individual evaluation dimensions. In some specific evaluation dimensions (e.g., visual reasoning), most MLLMs appear to achieve high accuracy. However, when comparing MLLMs to LLMs, we observe that the performance improvement of most MLLMs is still relatively limited.
MLLMs achieve relatively high performance on global image comprehension. On the evaluation of scene understanding and visual reasoning, the accuracy of most MLLMs is higher than 40%, and all MLLMs outperform LLMs. This shows that MLLMs are more proficient in the global understanding and reasoning of images, compared with other evaluation dimensions that require fine-grained instance-level comprehension. | 2307.16125#44 |
2307.16125 | 45 | InstructBLIP achieves top performance on 8 of 12 evaluation dimensions. We can observe that InstructBLIP outperforms other models on 8 evaluation dimensions, and possible explanations for this superior performance are as follows. (a) The instruction-tuning data of InstructBLIP contains 16M samples in total (larger than other instruction-tuning datasets) and covers a wide range of multi-modal tasks, even including QA data for OCR and temporal visual reasoning. (b) The weights of the LLM are frozen when performing instruction tuning of InstructBLIP, which may alleviate catastrophic forgetting. However, InstructBLIP series models still perform poorly on action recognition and
procedure understanding, which differ significantly from the instruction-tuning data. For instance, on action recognition, which requires understanding fine-grained actions in Something-Something-v2, InstructBLIP series models cannot achieve a significant performance gain compared to LLMs (i.e., lower than 2%). This indicates that InstructBLIP series models may fail to generalize well to out-of-distribution data. | 2307.16125#45 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
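The chunk above attributes part of InstructBLIP's strong results to keeping the LLM weights frozen during instruction tuning. A minimal PyTorch-style sketch of that setup follows; it is illustrative only, and the module names (`llm`, `qformer`, `projection`) are assumptions rather than the actual InstructBLIP code.

```python
# Hedged sketch: freeze the language model during instruction tuning and
# train only the vision-to-language bridge modules.
import torch

def trainable_parameters(llm: torch.nn.Module,
                         qformer: torch.nn.Module,
                         projection: torch.nn.Module) -> list:
    # Freezing the LLM means instruction tuning cannot overwrite its weights,
    # which is the property credited above with alleviating catastrophic forgetting.
    for p in llm.parameters():
        p.requires_grad = False
    # Only the lightweight bridge (here a Q-Former plus a projection layer) is updated.
    params = list(qformer.parameters()) + list(projection.parameters())
    for p in params:
        p.requires_grad = True
    return params

# Usage (with real modules): 
# optimizer = torch.optim.AdamW(trainable_parameters(llm, qformer, projection), lr=1e-5)
```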
2307.16125 | 46 | MLLMs show weaker abilities in understanding spatial relationships between objects. The top-ranked model, InstructBLIP, achieves only 40% accuracy on the evaluation of spatial relations. This shows that recognizing relative spatial relationships between instances is challenging, since there are many possible arrangements and combinations of instances, and in some cases the spatial relationship between two objects is inherently ambiguous and hard to determine.
Most MLLMs show poor performance on text recognition. Apart from InstructBLIP, all other models achieve an accuracy lower than 40% on this dimension, due to the lack of textual elements in multimodal pre-training datasets. Since the ability to accurately identify and extract text from images is important, future work should develop models that are better equipped for text recognition by pre-training on visual data with rich textual elements.
VideoLLMs achieve promising results on spatial understanding. For example, VideoChat achieves 39.98% accuracy on instance localization (ranking 4th, surpassing LLaVA by 11.55% and performing only 3.58% lower than the top-1 model). This shows that VideoChat's spatial understanding ability does not degrade when jointly training on both image and video data during the pre-training and instruction-tuning stages. | 2307.16125#46 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 47 | Most MLLMs exhibit unsatisfactory performance on fine-grained temporal understanding. It is notable that on the evaluation of procedure understanding, the top-ranked model, VPGTrans, achieves an accuracy that is only 5% higher than that of LLaMA, and the performance improvement of the next four MLLMs is even less than 1.2% compared with LLaMA. This demonstrates that it is extremely difficult for both ImageLLMs and VideoLLMs to perform the fine-grained temporal reasoning needed to recognize and sort the key actions in a video.
VideoLLMs fail to achieve competitive performance on temporal understanding. Although VideoLLMs are instruction-tuned on video data, they do not exhibit a significant advantage on the evaluation dimensions for temporal understanding. Surprisingly, two VideoLLMs (Video-ChatGPT and Valley) even perform worse than most ImageLLMs on action recognition, action prediction and procedure understanding. This indicates that the capabilities of existing VideoLLMs for fine-grained action recognition, temporal relationship understanding and temporal reasoning are still limited. Similar concerns about existing VideoLLMs are also presented in recent works [15, 16].
# 5 Conclusion | 2307.16125#47 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 48 | # 5 Conclusion
In this work, we propose a large-scale benchmark SEED-Bench to provide a comprehensive and objective evaluation of Multimodal Large Language Models (MLLMs) on generative comprehension. SEED-Bench consists of 19K multiple-choice questions with accurate human annotations, covering 12 evaluation dimensions for both spatial and temporal understanding. We design an advanced pipeline to create multiple-choice questions that target specific evaluation dimensions, facilitating the scalability of evaluation data across a variety of domains. We also integrate automatic filtering and manual verification to improve the quality of the generated questions and answers. We conduct a thorough evaluation of 18 models, analyzing and comparing their performances to provide insights for future research. We plan to launch and consistently maintain a leaderboard, offering a platform for the community to assess model performance. We will continue to further broaden the evaluation dimensions of SEED-Bench with more data.
# Acknowledgements
We sincerely acknowledge Junting Pan (CUHK MMLab) for the insightful suggestions, Zhan Tong (Nanjing University) for the data processing, and Yi Chen (Tencent AI Lab) for the engaging discussions.
# References | 2307.16125#48 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 49 | # References
[1] Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022.
[2] OpenAI. Gpt-4 technical report, 2023.
[3] OpenAI. Introducing chatgpt. https://openai.com/blog/chatgpt, 2022.
[4] FastChat. Vicuna. https://github.com/lm-sys/FastChat, 2023.
[5] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. | 2307.16125#49 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 50 | [6] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. ICML, 2023.
[7] Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023.
[8] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv preprint arXiv:2304.08485, 2023.
[9] Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. mplug-owl: Modularization empowers large language models with multimodality. arXiv preprint arXiv:2304.14178, 2023. | 2307.16125#50 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 51 | [10] Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructblip: Towards general-purpose vision-language models with instruction tuning. arXiv preprint arXiv:2305.06500, 2023.
[11] Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu. Otter: A multi-modal model with in-context instruction tuning. arXiv preprint arXiv:2305.03726, 2023.
[12] Tao Gong, Chengqi Lyu, Shilong Zhang, Yudong Wang, Miao Zheng, Qian Zhao, Kuikun Liu, Wenwei Zhang, Ping Luo, and Kai Chen. Multimodal-gpt: A vision and language model for dialogue with humans, 2023.
[13] Yixuan Su, Tian Lan, Huayang Li, Jialu Xu, Yan Wang, and Deng Cai. Pandagpt: One model to instruction- follow them all. arXiv preprint arXiv:2305.16355, 2023. | 2307.16125#51 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 52 | [14] Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, and Furu Wei. Kosmos-2: Grounding multimodal large language models to the world. arXiv preprint arXiv:2306.14824, 2023.
[15] KunChang Li, Yinan He, Yi Wang, Yizhuo Li, Wenhai Wang, Ping Luo, Yali Wang, Limin Wang, and Yu Qiao. Videochat: Chat-centric video understanding. arXiv preprint arXiv:2305.06355, 2023.
[16] Muhammad Maaz, Hanoona Rasheed, Salman Khan, and Fahad Shahbaz Khan. Video-chatgpt: Towards detailed video understanding via large vision and language models. arXiv preprint arXiv:2306.05424, 2023.
[17] Ruipu Luo, Ziwang Zhao, Min Yang, Junwei Dong, Minghui Qiu, Pengcheng Lu, Tao Wang, and Zhongyu Wei. Valley: Video assistant with large language model enhanced ability. arXiv preprint arXiv:2306.07207, 2023. | 2307.16125#52 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 53 | [18] Yuying Ge, Yixiao Ge, Ziyun Zeng, Xintao Wang, and Ying Shan. Planting a seed of vision in large language model. arXiv preprint arXiv:2307.08041, 2023.
[19] Quan Sun, Qiying Yu, Yufeng Cui, Fan Zhang, Xiaosong Zhang, Yueze Wang, Hongcheng Gao, Jingjing Liu, Tiejun Huang, and Xinlong Wang. Generative pretraining in multimodality. arXiv preprint arXiv:2307.05222, 2023.
[20] Yu Lili, Shi Bowen, Pasunuru Ram, Miller Benjamin, Golovneva Olga, Wang Tianlu, Babu Arun, Tang Binh, Karrer Brian, Sheynin Shelly, Ross Candace, Polyak Adam, Howes Russ, Sharma Vasu, Xu Jacob, Singer Uriel, Li (AI) Daniel, Ghosh Gargi, Taigman Yaniv, Fazel-Zarandi Maryam, Celikyilmaz Asli, Zettlemoyer Luke, and Aghajanyan Armen. Scaling autoregressive multi-modal models: Pretraining and instruction tuning. 2023. | 2307.16125#53 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 54 | [21] Jing Yu Koh, Daniel Fried, and Ruslan Salakhutdinov. Generating images with multimodal language models. arXiv preprint arXiv:2305.17216, 2023.
[22] Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6904–6913, 2017.
[23] Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Zhenyu Qiu, Wei Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, and Rongrong Ji. Mme: A comprehensive evaluation benchmark for multimodal large language models. arXiv preprint arXiv:2306.13394, 2023. | 2307.16125#54 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 55 | [24] Zhenfei Yin, Jiong Wang, Jianjian Cao, Zhelun Shi, Dingning Liu, Mukai Li, Lu Sheng, Lei Bai, Xiaoshui Huang, Zhiyong Wang, et al. Lamm: Language-assisted multi-modal instruction-tuning dataset, framework, and benchmark. arXiv preprint arXiv:2306.06687, 2023.
[25] Peng Xu, Wenqi Shao, Kaipeng Zhang, Peng Gao, Shuo Liu, Meng Lei, Fanqing Meng, Siyuan Huang, Yu Qiao, and Ping Luo. Lvlm-ehub: A comprehensive evaluation benchmark for large vision-language models. arXiv preprint arXiv:2306.09265, 2023.
[26] Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, et al. Mmbench: Is your multi-modal model an all-around player? arXiv preprint arXiv:2307.06281, 2023. | 2307.16125#55 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 56 | [27] Xinyu Huang, Youcai Zhang, Jinyu Ma, Weiwei Tian, Rui Feng, Yuejie Zhang, Yaqian Li, Yandong Guo, and Lei Zhang. Tag2text: Guiding vision-language model via image tagging. arXiv preprint arXiv:2303.05657, 2023.
[28] Jialian Wu, Jianfeng Wang, Zhengyuan Yang, Zhe Gan, Zicheng Liu, Junsong Yuan, and Lijuan Wang. Grit: A generative region-to-text transformer for object understanding. arXiv preprint arXiv:2212.00280, 2022.
[29] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, and Ross Girshick. Segment anything. arXiv:2304.02643, 2023.
[30] Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. Vinvl: Revisiting visual representations in vision-language models. In CVPR, 2021. | 2307.16125#56 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 57 | [31] PaddleOCR. https://github.com/PaddlePaddle/PaddleOCR.
[32] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
[33] Guangzhi Wang, Yixiao Ge, Xiaohan Ding, Mohan Kankanhalli, and Ying Shan. What makes for good visual tokenizers for large language models? arXiv preprint arXiv:2305.12223, 2023.
[34] Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In ACL, 2018. | 2307.16125#57 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 58 | [35] Raghav Goyal, Samira Ebrahimi Kahou, Vincent Michalski, Joanna Materzynska, Susanne Westphal, Heuna Kim, Valentin Haenel, Ingo Fruend, Peter Yianilos, Moritz Mueller-Freitag, et al. The "something something" video database for learning and evaluating visual common sense. In ICCV, 2017.
[36] Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Antonino Furnari, Evangelos Kazakos, Jian Ma, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, et al. Rescaling egocentric vision. arXiv preprint arXiv:2006.13256, 2020.
[37] Hilde Kuehne, Ali Arslan, and Thomas Serre. The language of actions: Recovering the syntax and semantics of goal-directed human activities. In CVPR, 2014.
[38] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In ICML, 2022. | 2307.16125#58 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.16125 | 59 | [39] Stephanie Lin, Jacob Hilton, and Owain Evans. Truthfulqa: Measuring how models mimic human falsehoods. arXiv preprint arXiv:2109.07958, 2021.
[40] Ao Zhang, Hao Fei, Yuan Yao, Wei Ji, Li Li, Zhiyuan Liu, and Tat-Seng Chua. Transfer visual prompt generator across llms. arXiv preprint arXiv:2305.01278, 2023.
[41] ml_foundations. Openflamingo. https://github.com/mlfoundations/open_flamingo, 2023.
[42] Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, Hongsheng Li, and Yu Qiao. Llama-adapter v2: Parameter-efficient visual instruction model. arXiv preprint arXiv:2304.15010, 2023.
| 2307.16125#59 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple
choice questions with accurate human annotations (x 6 larger than existing
benchmarks), which spans 12 evaluation dimensions including the comprehension
of both the image and video modality. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with groundtruth options derived from
human annotation enables an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability. | http://arxiv.org/pdf/2307.16125 | Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, Ying Shan | cs.CL, cs.CV | Technical Report; Project released at:
https://github.com/AILab-CVC/SEED-Bench | null | cs.CL | 20230730 | 20230802 | [
{
"id": "2306.05424"
},
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.06355"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2006.13256"
},
{
"id": "2305.12223"
},
{
"id": "2307.08041"
},
{
"id": "2212.00280"
},
{
"id": "2305.16355"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2307.06281"
},
{
"id": "2307.05222"
},
{
"id": "2304.10592"
},
{
"id": "2306.06687"
},
{
"id": "2306.14824"
},
{
"id": "2305.17216"
},
{
"id": "2306.07207"
},
{
"id": "2306.09265"
},
{
"id": "2303.05657"
},
{
"id": "2305.03726"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2109.07958"
}
] |
2307.15337 | 0 | arXiv:2307.15337v2 [cs.CL] 8 Oct 2023
Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding
Xuefei Ning1* [email protected]
Zinan Lin2* [email protected]
Zixuan Zhou1* [email protected]
Zifu Wang3 [email protected]
Huazhong Yang1 [email protected]
Yu Wang1 [email protected]
1 Department of Electronic Engineering, Tsinghua University, Beijing, China 2 Microsoft Research, Redmond, Washington, USA 3 ESAT-PSI, KU Leuven, Leuven, Belgium
Website: https://sites.google.com/view/sot-llm
# ABSTRACT | 2307.15337#0 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 0 | # Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support
# Zilin Ma*, MS1, Yiyang Mei*, JD, MPH2, Zhaoyuan Su, MS3, 1Harvard University, Cambridge, MA; 2Emory University, Atlanta, GA; 3University of California, Irvine, Irvine, CA
Abstract Conversational agents powered by large language models (LLM) have increasingly been utilized in the realm of mental well-being support. However, the implications and outcomes associated with their usage in such a critical field remain somewhat ambiguous and unexplored. We conducted a qualitative analysis of 120 posts, encompassing 2917 user comments, drawn from the most popular subreddit focused on mental health support applications powered by large language models (u/Replika). This exploration aimed to shed light on the advantages and potential pitfalls associated with the integration of these sophisticated models in conversational agents intended for mental health support. We found the app (Replika) beneficial in offering on-demand, non-judgmental support, boosting user confidence, and aiding self-discovery. Yet, it faced challenges in filtering harmful content, sustaining consistent communication, remembering new information, and mitigating users' overdependence. The stigma attached further risked isolating users socially. We strongly assert that future researchers and designers must thoroughly evaluate the appropriateness of employing LLMs for mental well-being support, ensuring their responsible and effective application. | 2307.15810#0 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15833 | 0 | arXiv:2307.15833v1 [cs.CL] 28 Jul 2023
# Dialogue Shaping: Empowering Agents through NPC Interaction
# Wei Zhou, Xiangyu Peng and Mark Riedl
Georgia Institute of Technology, Atlanta, GA, 30332, USA
Abstract One major challenge in reinforcement learning (RL) is the large number of steps the RL agent needs to converge in the training process and learn the optimal policy, especially in text-based game environments where the action space is extensive. However, non-player characters (NPCs) sometimes hold key information about the game, which can potentially help to train RL agents faster. Thus, this paper explores how to interact and converse with NPC agents to get this key information using large language models (LLMs), as well as how to incorporate this information to speed up the RL agent's training using knowledge graphs (KGs) and Story Shaping.
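To make the proposed pipeline concrete, here is a minimal sketch of the idea (not the authors' implementation): `llm_chat` is a hypothetical stand-in for an LLM chat API whose reply is hard-coded so the snippet runs offline, and `extract_triples` uses toy keyword rules where the paper would rely on the LLM itself.

```python
def llm_chat(prompt: str) -> str:
    """Hypothetical stand-in for an LLM chat API; the canned reply lets the sketch run offline."""
    return "To kill the dragon you need the sword hidden in the cellar."

def extract_triples(utterance: str) -> set:
    """Toy keyword-based extraction; in practice the LLM itself would produce the triples."""
    triples = set()
    if "sword" in utterance and "dragon" in utterance:
        triples.add(("sword", "needed_for", "kill dragon"))
    if "cellar" in utterance:
        triples.add(("sword", "located_in", "cellar"))
    return triples

npc_reply = llm_chat("Ask the blacksmith: how can the dragon be defeated?")
knowledge_graph = extract_triples(npc_reply)
print(knowledge_graph)
```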
# Keywords
Large Language Model, ChatGPT, Reinforcement Learning, Knowledge Graph, Text adventure game
# 1. Introduction | 2307.15833#0 | Dialogue Shaping: Empowering Agents through NPC Interaction | One major challenge in reinforcement learning (RL) is the large amount of
steps for the RL agent needs to converge in the training process and learn the
optimal policy, especially in text-based game environments where the action
space is extensive. However, non-player characters (NPCs) sometimes hold some
key information about the game, which can potentially help to train RL agents
faster. Thus, this paper explores how to interact and converse with NPC agents
to get the key information using large language models (LLMs), as well as
incorporate this information to speed up RL agent's training using knowledge
graphs (KGs) and Story Shaping. | http://arxiv.org/pdf/2307.15833 | Wei Zhou, Xiangyu Peng, Mark Riedl | cs.CL | null | null | cs.CL | 20230728 | 20230728 | [
{
"id": "2301.10107"
},
{
"id": "1812.01628"
},
{
"id": "1903.03094"
},
{
"id": "2006.07409"
},
{
"id": "2010.02903"
},
{
"id": "2001.08837"
}
] |
2307.15337 | 1 | Website: https://sites.google.com/view/sot-llm
# ABSTRACT
This work aims at decreasing the end-to-end generation latency of large language models (LLMs). One of the major causes of the high generation latency is the sequential decoding approach adopted by almost all state-of-the-art LLMs. In this work, motivated by the thinking and writing process of humans, we propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the skeleton of the answer, and then conducts parallel API calls or batched decoding to complete the contents of each skeleton point in parallel. Not only does SoT provide considerable speed-ups across 12 LLMs, but it can also potentially improve the answer quality on several question categories. SoT is an initial attempt at data-centric optimization for inference efficiency, and further underscores the potential of pushing LLMs to think more like a human for answer quality.
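As a rough, hedged illustration of the two-stage procedure (the prompts and the `llm` stub below are assumptions, not the paper's actual prompts or API):

```python
from concurrent.futures import ThreadPoolExecutor

def llm(prompt: str) -> str:
    """Stub standing in for a call to any LLM endpoint (API or local model)."""
    if "skeleton" in prompt:
        return "1. Active listening\n2. Identify the issues\n3. Compromise"
    return "(expanded content for: " + prompt.splitlines()[-1] + ")"

question = "How can two people resolve a disagreement?"

# Stage 1: ask for a concise skeleton of the answer.
skeleton = llm("Write only a short skeleton (numbered points) for the question.\n" + question)
points = [line for line in skeleton.splitlines() if line.strip()]

# Stage 2: expand every skeleton point in parallel (parallel API calls or batched decoding).
with ThreadPoolExecutor() as pool:
    expansions = list(pool.map(lambda point: llm("Expand this point in 1-2 sentences.\n" + point), points))

print("\n".join(point + " " + text for point, text in zip(points, expansions)))
```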
# INTRODUCTION | 2307.15337#1 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 1 | Introduction The World Health Organization (WHO) defines mental health as a state of mental well-being that enables people to cope with the stresses of life, realize their abilities, learn well and work well, and contribute to their community. It is an indispensable component of our health that underpins our ability to make decisions1. According to the Centers for Disease Control and Prevention (CDC), between August 2020 and February 2021, the percentage of adults exhibiting symptoms of anxiety or depressive disorder rose from 36.4% to 41.5%2. Nearly one in five U.S. adults feel "serious loneliness" since the outbreak of the COVID-19 pandemic3. The matter of mental health has become pressing, prompting calls from research institutions and public sectors to increase efforts towards addressing mental well-being4. | 2307.15810#1 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 1 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, and Brianna Zitkovich Google DeepMind. Authors listed in alphabetical order, with contributions listed in Appendix A. | 2307.15818#1 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15833 | 1 | Large Language Model, ChatGPT, Reinforcement Learning, Knowledge Graph, Text adventure game
# 1. Introduction
Reinforcement learning (RL) has demonstrated remarkable effectiveness in solving intricate decision-making tasks, but its trial-and-error approach often leads to slow convergence to the optimal policy. In text-adventure games, NPCs possess crucial information that could spare the agent from extensive trial-and-error. Utilizing this prior knowledge could significantly reduce the agent's policy search space, making it more efficient by breaking down complex tasks into smaller, focused objectives. For instance, knowing that "killing the dragon" requires a sword allows the agent to concentrate on finding the sword directly, rather than wasting steps exploring how to defeat the dragon.
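A toy example (not taken from the paper) of how such a prior fact can shrink the effective search space, by ranking candidate actions that mention a known prerequisite ahead of the rest:

```python
actions = ["go north", "open chest", "take sword", "attack dragon", "talk to cat"]
known_facts = {("sword", "needed_for", "kill dragon")}  # fact obtained from an NPC

def rank_actions(candidates, facts):
    # Actions that mention a known prerequisite are explored before the rest.
    prerequisites = {subj for subj, rel, _ in facts if rel == "needed_for"}
    return sorted(candidates, key=lambda a: not any(p in a for p in prerequisites))

print(rank_actions(actions, known_facts))  # 'take sword' is tried first
```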
Large Language Models (LLMs) are incredibly capable of conversational tasks and are highly configurable using prompting techniques. Thus, we chose to use them as the dialogue module responsible for talking to the NPC. Meanwhile, they are not as efficient as the RL agent in terms of searching for the optimal chain of actions. Therefore, we chose to keep the RL agent as the main component responsible for searching for the optimal policy while speeding up its search using a dialogue module that is comprised of LLMs. | 2307.15833#1 | Dialogue Shaping: Empowering Agents through NPC Interaction | One major challenge in reinforcement learning (RL) is the large amount of
steps for the RL agent needs to converge in the training process and learn the
optimal policy, especially in text-based game environments where the action
space is extensive. However, non-player characters (NPCs) sometimes hold some
key information about the game, which can potentially help to train RL agents
faster. Thus, this paper explores how to interact and converse with NPC agents
to get the key information using large language models (LLMs), as well as
incorporate this information to speed up RL agent's training using knowledge
graphs (KGs) and Story Shaping. | http://arxiv.org/pdf/2307.15833 | Wei Zhou, Xiangyu Peng, Mark Riedl | cs.CL | null | null | cs.CL | 20230728 | 20230728 | [
{
"id": "2301.10107"
},
{
"id": "1812.01628"
},
{
"id": "1903.03094"
},
{
"id": "2006.07409"
},
{
"id": "2010.02903"
},
{
"id": "2001.08837"
}
] |
2307.15337 | 2 | # INTRODUCTION
Large language models (LLMs) (Brown et al., 2020; Touvron et al., 2023a; Du et al., 2022; OpenAI, 2023; Zheng et al., 2023) have shown exceptional performance in natural language processing and chatbot systems. However, the inference process of the state-of-the-art LLMs is slow, hindering their interactive use. For example, it takes 22 seconds for Claude (Anthropic, 2023) (accessed through Slack API) and 43 seconds for Vicuna-33B V1.3 (a 33B LLaMA-based model, running locally on one NVIDIA A100 GPU) to answer the question in Fig. 1.
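Cause (3) listed below, sequential decoding, can be pictured with a minimal generation loop; this is a schematic sketch with a dummy `model_step`, not any specific library's API. Because each new token requires a forward pass conditioned on all previous tokens, the passes cannot run in parallel, and end-to-end latency grows roughly linearly with answer length.

```python
def generate(model_step, prompt_tokens, max_new_tokens):
    # Sequential decoding: token t+1 depends on tokens 1..t, so these
    # max_new_tokens forward passes cannot be executed in parallel.
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        next_token = model_step(tokens)  # one full forward pass per generated token
        tokens.append(next_token)
    return tokens

# Dummy "model" so the sketch runs: it always continues with token id 0.
print(len(generate(lambda tokens: 0, prompt_tokens=[1, 2, 3], max_new_tokens=10)))  # -> 13
```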
We conclude three major causes of LLMs' slow inference: (1) A large model size requires a large amount of memory, memory access, and computation. For example, the FP16 weights of 175B GPT-3 take 350GB memory, which means at least 5×80GB A100 GPUs are needed to keep the model in GPU memory. Even with enough GPUs, the heavy memory access and computation slow down the inference. (2) The attention operation in the prevailing transformer architecture is I/O bounded and has a quadratic memory and computation complexity in sequence length. (3) The sequential decoding approach in inference generates tokens one by one. This approach introduces a significant | 2307.15337#2 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 2 | Health Informatics researchers have long been exploring how consumer health technologies, such as mobile health apps and online health communities, can promote mental wellness5â9. Among these technologies, conversation agents (CAs) have gained increased attention for their potential to provide mental well-being and social support. Research has shown that using CAs for mental health care can lead to increased accessibility due to benefits such as reduced cost, time efficiency, and anonymity compared to traditional care strategies10. However, many CA systems are still rule-based (i.e., chat with users following a predefined script). They struggle to provide users with human-like interactions, as they cannot offer open-ended conversations tailored to users' emotional needs11, 12. | 2307.15810#2 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 2 | We study how vision-language models trained on Internet-scale data can be incorporated directly into end-to-end robotic control to boost generalization and enable emergent semantic reasoning. Our goal is to enable a single end-to-end trained model to both learn to map robot observations to actions and enjoy the benefits of large-scale pretraining on language and vision-language data from the web. To this end, we propose to co-fine-tune state-of-the-art vision-language models on both robotic trajectory data and Internet-scale vision-language tasks, such as visual question answering. In contrast to other approaches, we propose a simple, general recipe to achieve this goal: in order to fit both natural language responses and robotic actions into the same format, we express the actions as text tokens and incorporate them directly into the training set of the model in the same way as natural language tokens. We refer to such category of models as vision-language-action models (VLA) and instantiate an example of such a model, which we call RT-2. Our extensive evaluation (6k evaluation trials) shows that our approach leads to performant robotic policies and enables RT-2 to obtain a range of emergent | 2307.15818#2 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15833 | 2 | The RL agent acts as an action module and the LLMs act as a dialogue module. Yet, we still need to find a way to bridge these two modules, i.e. incorporating the information that the dialogue module retrieves into the action module. For this purpose, we turn to the technique
of Story Shaping[1], which is able to guide the action module to imitate the optimal trajectory.
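A rough sketch of that bridge (hedged: the actual Story Shaping reward scheme in [1] differs in detail): triples distilled from the NPC dialogue form a target knowledge graph, and the agent earns an intermediate shaping bonus as its state graph covers more of that target.

```python
# Target graph distilled from the NPC conversation (illustrative triples).
target_kg = {("sword", "located_in", "cellar"),
             ("sword", "needed_for", "kill dragon")}

def shaping_bonus(state_kg, target=target_kg, weight=1.0):
    """Intermediate reward proportional to how much of the target KG the agent has realized."""
    return weight * len(state_kg & target) / len(target)

print(shaping_bonus({("sword", "located_in", "cellar")}))  # -> 0.5
```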
In this paper, we propose Dialogue Shaping, a framework that is able to extract useful information through conversation with NPCs, and then convert the information into knowledge graphs which are then used to speed up the RL agent's convergence to the optimal policy by using the Story Shaping technique[1]. | 2307.15833#2 | Dialogue Shaping: Empowering Agents through NPC Interaction | One major challenge in reinforcement learning (RL) is the large amount of
# 2. Background and Related Work | 2307.15833#2 | Dialogue Shaping: Empowering Agents through NPC Interaction | One major challenge in reinforcement learning (RL) is the large amount of
steps for the RL agent needs to converge in the training process and learn the
optimal policy, especially in text-based game environments where the action
space is extensive. However, non-player characters (NPCs) sometimes hold some
key information about the game, which can potentially help to train RL agents
faster. Thus, this paper explores how to interact and converse with NPC agents
to get the key information using large language models (LLMs), as well as
incorporate this information to speed up RL agent's training using knowledge
graphs (KGs) and Story Shaping. | http://arxiv.org/pdf/2307.15833 | Wei Zhou, Xiangyu Peng, Mark Riedl | cs.CL | null | null | cs.CL | 20230728 | 20230728 | [
{
"id": "2301.10107"
},
{
"id": "1812.01628"
},
{
"id": "1903.03094"
},
{
"id": "2006.07409"
},
{
"id": "2010.02903"
},
{
"id": "2001.08837"
}
] |
2307.15337 | 3 | ∗Equal contribution. †The main updates in arXiv V2 are as follows: (1) Add the quality and efficiency evaluation of SoT on GPT-4. (2) Use GPT-4 as the judge for answer quality evaluation. The old results with ChatGPT-3.5 as the judge are moved to App. I.3. (3) Add the SoT with Router (SoT-R) method (§ 4) which adaptively triggers SoT on suitable questions. (4) Move detailed answer analysis to the appendices.
| 2307.15337#3 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 3 | The recent advancement of Large Language Models (LLMs), which aim to generate coherent text completion to inputs has encouraged the use of conversation agents for healthcare consumers13, 14. LLMs can infer the contexts of the input texts (known as the prompt), and generate texts that coherently follow the prompts. People have used these extraordinary capabilities to build information extraction15, and code generation systems16. LLMs potentially promise to offer mental wellness support to users by offering them open dialogues, which parse the semantics of the user input and therefore interact with the users emotionally. Due to the potential benefits of LLMs, an increasing number of CAs have recently employed LLMs as their underlying structure to provide healthcare consumers with mental wellness and emotional support17â19. However, LLMs also have limitations. For instance, prior research has pointed out that LLM-based CAs can be challenging to control in terms of content output and preventing harmful or false information20. Such limitations could potentially have adverse effects on users' mental well-being. | 2307.15810#3 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |
2307.15818 | 3 | (6k evaluation trials) shows that our approach leads to performant robotic policies and enables RT-2 to obtain a range of emergent capabilities from Internet-scale training. This includes significantly improved generalization to novel objects, the ability to interpret commands not present in the robot training data (such as placing an object onto a particular number or icon), and the ability to perform rudimentary reasoning in response to user commands (such as picking up the smallest or largest object, or the one closest to another object). We further show that incorporating chain of thought reasoning allows RT-2 to perform multi-stage semantic reasoning, for example figuring out which object to pick up for use as an improvised hammer (a rock), or which type of drink is best suited for someone who is tired (an energy drink). | 2307.15818#3 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study how vision-language models trained on Internet-scale data can be
incorporated directly into end-to-end robotic control to boost generalization
and enable emergent semantic reasoning. Our goal is to enable a single
end-to-end trained model to both learn to map robot observations to actions and
enjoy the benefits of large-scale pretraining on language and vision-language
data from the web. To this end, we propose to co-fine-tune state-of-the-art
vision-language models on both robotic trajectory data and Internet-scale
vision-language tasks, such as visual question answering. In contrast to other
approaches, we propose a simple, general recipe to achieve this goal: in order
to fit both natural language responses and robotic actions into the same
format, we express the actions as text tokens and incorporate them directly
into the training set of the model in the same way as natural language tokens.
We refer to such category of models as vision-language-action models (VLA) and
instantiate an example of such a model, which we call RT-2. Our extensive
evaluation (6k evaluation trials) shows that our approach leads to performant
robotic policies and enables RT-2 to obtain a range of emergent capabilities
from Internet-scale training. This includes significantly improved
generalization to novel objects, the ability to interpret commands not present
in the robot training data (such as placing an object onto a particular number
or icon), and the ability to perform rudimentary reasoning in response to user
commands (such as picking up the smallest or largest object, or the one closest
to another object). We further show that incorporating chain of thought
reasoning allows RT-2 to perform multi-stage semantic reasoning, for example
figuring out which object to pick up for use as an improvised hammer (a rock),
or which type of drink is best suited for someone who is tired (an energy
drink). | http://arxiv.org/pdf/2307.15818 | Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich | cs.RO, cs.CL, cs.CV, cs.LG | Website: https://robotics-transformer.github.io/ | null | cs.RO | 20230728 | 20230728 | [
{
"id": "2304.02643"
},
{
"id": "2305.10403"
},
{
"id": "2206.06336"
},
{
"id": "2203.06173"
},
{
"id": "2112.12143"
},
{
"id": "2205.06230"
},
{
"id": "2201.11903"
},
{
"id": "2005.09382"
},
{
"id": "2210.13431"
},
{
"id": "2203.10421"
},
{
"id": "2212.06817"
},
{
"id": "2303.07280"
},
{
"id": "2203.12601"
},
{
"id": "2107.03374"
},
{
"id": "2303.18240"
},
{
"id": "2206.14858"
},
{
"id": "2301.12507"
},
{
"id": "2005.07648"
},
{
"id": "2303.03378"
},
{
"id": "2107.03380"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2210.03094"
},
{
"id": "2202.01344"
},
{
"id": "2304.08587"
},
{
"id": "2110.14168"
},
{
"id": "2210.00030"
},
{
"id": "2204.14198"
},
{
"id": "2305.15021"
},
{
"id": "2112.01511"
},
{
"id": "1802.01557"
},
{
"id": "2301.12597"
},
{
"id": "2305.05658"
},
{
"id": "1903.03698"
},
{
"id": "2205.06175"
},
{
"id": "2304.08742"
},
{
"id": "2007.04309"
},
{
"id": "2302.12766"
},
{
"id": "2210.06407"
},
{
"id": "2306.00958"
},
{
"id": "1908.03557"
},
{
"id": "2303.00905"
},
{
"id": "2209.05451"
},
{
"id": "2210.10047"
},
{
"id": "2104.13921"
},
{
"id": "2211.11736"
},
{
"id": "2204.01691"
},
{
"id": "2004.13649"
},
{
"id": "1703.09312"
}
] |
2307.15833 | 3 | # 2. Background and Related Work
Reinforcement Learning in Text Games Text games involve turn-based interactions where players read descriptions of the game's environment in natural language and respond with short text-based actions. These games can be described using partially-observable Markov Decision Processes, denoted as ⟨S, T, A, O, Ω, R, γ⟩, representing possible states, transition probabilities, vocabulary for commands, observation probabilities, reward function, and discount factor. The RL agent's goal is to learn a policy π(o) → a to maximize expected future rewards. | 2307.15833#3 | Dialogue Shaping: Empowering Agents through NPC Interaction | One major challenge in reinforcement learning (RL) is the large amount of
steps for the RL agent needs to converge in the training process and learn the
optimal policy, especially in text-based game environments where the action
space is extensive. However, non-player characters (NPCs) sometimes hold some
key information about the game, which can potentially help to train RL agents
faster. Thus, this paper explores how to interact and converse with NPC agents
to get the key information using large language models (LLMs), as well as
incorporate this information to speed up RL agent's training using knowledge
graphs (KGs) and Story Shaping. | http://arxiv.org/pdf/2307.15833 | Wei Zhou, Xiangyu Peng, Mark Riedl | cs.CL | null | null | cs.CL | 20230728 | 20230728 | [
{
"id": "2301.10107"
},
{
"id": "1812.01628"
},
{
"id": "1903.03094"
},
{
"id": "2006.07409"
},
{
"id": "2010.02903"
},
{
"id": "2001.08837"
}
] |
2307.15337 | 4 |
[Figure 1: Normal decoding vs. Skeleton-of-Thought. Normal decoding generates the answer sequentially (slower); SoT first produces a skeleton (e.g., 1. Active listening, 2. Identify issues, 3. Compromise) and then expands the points in parallel (faster). The accompanying scatter plot reports net win rate in answer quality versus speed-up for models including GPT-4, ChatGPT-3.5, Claude, Vicuna (7B/13B/33B), LLaMA2-Chat (7B/13B), OpenChat-13B, and UltraLM-13B, among others.] | 2307.15337#4 | Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | This work aims at decreasing the end-to-end generation latency of large
language models (LLMs). One of the major causes of the high generation latency
is the sequential decoding approach adopted by almost all state-of-the-art
LLMs. In this work, motivated by the thinking and writing process of humans, we
propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the
skeleton of the answer, and then conducts parallel API calls or batched
decoding to complete the contents of each skeleton point in parallel. Not only
does SoT provide considerable speed-ups across 12 LLMs, but it can also
potentially improve the answer quality on several question categories. SoT is
an initial attempt at data-centric optimization for inference efficiency, and
further underscores the potential of pushing LLMs to think more like a human
for answer quality. | http://arxiv.org/pdf/2307.15337 | Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | cs.CL, cs.AI | Technical report | null | cs.CL | 20230728 | 20231008 | [
{
"id": "2302.13971"
},
{
"id": "2006.04768"
},
{
"id": "2302.04761"
},
{
"id": "2303.06865"
},
{
"id": "2105.04663"
},
{
"id": "2304.12244"
},
{
"id": "2308.04371"
},
{
"id": "2305.10601"
},
{
"id": "1908.09791"
},
{
"id": "2303.10158"
},
{
"id": "2203.11171"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2303.08774"
},
{
"id": "2305.17144"
},
{
"id": "2210.03052"
},
{
"id": "1806.08342"
},
{
"id": "2210.03350"
},
{
"id": "2308.09687"
},
{
"id": "2210.03629"
},
{
"id": "2211.17192"
},
{
"id": "2309.06180"
},
{
"id": "2302.01318"
},
{
"id": "2104.08378"
},
{
"id": "2211.12588"
},
{
"id": "2306.00978"
},
{
"id": "2303.17580"
},
{
"id": "2210.11610"
},
{
"id": "2305.14233"
},
{
"id": "2001.04451"
},
{
"id": "2305.09781"
},
{
"id": "2104.08691"
},
{
"id": "1911.02150"
},
{
"id": "2109.01652"
},
{
"id": "2101.00190"
},
{
"id": "1510.00149"
},
{
"id": "2211.10438"
}
] |
2307.15810 | 4 | Given the increasing interest in developing and deploying LLM-based CAs for mental well-being support, we conducted qualitative research to gain insight into users' experiences with such systems. This inquiry is both timely and critical, as gaining an understanding of healthcare consumers' perspectives on these systems can identify potential limitations and benefits of LLM-based CAs. Ultimately, these insights can enable us to critically reflect on whether LLM-based CAs should be utilized for mental well-being support, and guide future research and design in developing more responsible, user-friendly, and safe LLM-based CAs for mental well-being support.
* : Equal contributions. | 2307.15810#4 | Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support | Conversational agents powered by large language models (LLM) have
increasingly been utilized in the realm of mental well-being support. However,
the implications and outcomes associated with their usage in such a critical
field remain somewhat ambiguous and unexplored. We conducted a qualitative
analysis of 120 posts, encompassing 2917 user comments, drawn from the most
popular subreddit focused on mental health support applications powered by
large language models (u/Replika). This exploration aimed to shed light on the
advantages and potential pitfalls associated with the integration of these
sophisticated models in conversational agents intended for mental health
support. We found the app (Replika) beneficial in offering on-demand,
non-judgmental support, boosting user confidence, and aiding self-discovery.
Yet, it faced challenges in filtering harmful content, sustaining consistent
communication, remembering new information, and mitigating users'
overdependence. The stigma attached further risked isolating users socially. We
strongly assert that future researchers and designers must thoroughly evaluate
the appropriateness of employing LLMs for mental well-being support, ensuring
their responsible and effective application. | http://arxiv.org/pdf/2307.15810 | Zilin Ma, Yiyang Mei, Zhaoyuan Su | cs.HC | null | null | cs.HC | 20230728 | 20230728 | [] |