Dataset schema (column · type · observed min–max; for string columns the range is character length):
doi: string (10–10)
chunk-id: int64 (0–936)
chunk: string (401–2.02k)
id: string (12–14)
title: string (8–162)
summary: string (228–1.92k)
source: string (31–31)
authors: string (7–6.97k)
categories: string (5–107)
comment: string (4–398)
journal_ref: string (8–194)
primary_category: string (5–17)
published: string (8–8)
updated: string (8–8)
references: list
2306.14565
73
Positive Instance Generation | Input Prompt: Give an image with following information: bounding box, positions that are the object left-top corner coordinates(X, Y), object sizes(Width, Height). Highly overlapping bounding boxes may refer to the same object. bounding box: elephant heard on rocks X: 73 Y: 80 Width: 418 Height: 418 woman wearing straw hat X: 179 Y: 285 Width: 23 Height: 43 woman wearing long dress X: 176 Y: 298 Width: 35 Height: 83 group of green chairs X: 153 Y: 326 Width: 95 Height: 126 orange bucket on sidewalk X: 80 Y: 334 Width: 50 Height: 60 man wearing white shirt X: 204 Y: 439 Width: 51 Height: 52 baby elephant behind adult elephant X: 244 Y: 235 Width: 119 Height: 155 group of umbrellas on deck X: 82 Y: 72 Width: 136 Height: 83 person wearing yellow shirt X: 202 Y: 270 Width: 35 Height: 46 water is dark green X: 252 Y: 87 Width: 231 Height: 99 a baby elephant X: 297 Y: 317 Width: 60 Height: 69 an orange bucket on the ground X: 91 Y: 341 Width: 38 Height: 36 a group of white umbrellas X: 99 Y: 82 Width: 112
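To make the bounding-box prompt layout above concrete, here is a small sketch (not code from the LRV-Instruction paper) that renders (description, X, Y, Width, Height) region annotations into that textual format; the helper name render_prompt and the tuple input structure are illustrative assumptions.

```python
# Hedged sketch: render region annotations into the bounding-box prompt layout
# quoted above. `render_prompt` and the tuple input format are our own assumptions,
# not artifacts of the LRV-Instruction pipeline.
from typing import List, Tuple

HEADER = (
    "Give an image with following information: bounding box, positions that are the "
    "object left-top corner coordinates(X, Y), object sizes(Width, Height). "
    "Highly overlapping bounding boxes may refer to the same object. bounding box:"
)

def render_prompt(regions: List[Tuple[str, int, int, int, int]]) -> str:
    """regions: (description, x, y, width, height) tuples, e.g. region captions."""
    parts = [HEADER]
    for desc, x, y, w, h in regions:
        parts.append(f"{desc} X: {x} Y: {y} Width: {w} Height: {h}")
    return " ".join(parts)

print(render_prompt([("elephant herd on rocks", 73, 80, 418, 418),
                     ("woman wearing straw hat", 179, 285, 23, 43)]))
```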
2306.14565#73
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14565
74
60 Height: 69 an orange bucket on the ground X: 91 Y: 341 Width: 38 Height: 36 a group of white umbrellas X: 99 Y: 82 Width: 112 Height: 28 a group of green chairs X: 149 Y: 364 Width: 94 Height: 86 a man in an orange shirt X: 204 Y: 265 Width: 31 Height: 47 a blue tin awning X: 65 Y: 190 Width: 88 Height: 95 a woman wearing a yellow dress X: 169 Y: 298 Width: 47 Height: 76 a woman wearing a tan hat X: 173 Y: 288 Width: 38 Height: 79 a man sitting down X: 200 Y: 425 Width: 65 Height: 72 a man wearing a white shirt X: 196 Y: 422 Width: 80 Height: 72 a elephant coming out of the water X: 384 Y: 219 Width: 88 Height: 88 a man sitting in a chair X: 200 Y: 428 Width: 79 Height: 67 a orange basket X: 68 Y: 329 Width: 77 Height: 69 blue chairs on a deck X: 150 Y: 325 Width: 98 Height: 142 elephants walking on rocks X: 152 Y: 161 Width: 261 Height: 239 a baby elephant X: 280 Y: 295 Width: 98 Height: 105 white umbrellas outside a building X: 91 Y: 66 Width: 161 Height: 53 a white tiled staircase X:
2306.14565#74
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
74
[Figure 6: File System structures designed for InterCode-Bash; the directory listings across File System 1 and File System 2 include .hidden.txt, archive.tar.gz, dir1, .hidden1.txt, a.txt, all.txt, file.c, file.txt, hello.c, hello.txt, long.txt, new1.sh, readonly.txt, script1.sh, sum.c, terminate.txt, csvfile1.csv, foo.txt, mysql, sql1.sql, hello.php, recent.txt, textfile7.txt.] [Figure 7: Top 30 most frequently occurring bash utilities out of the 66 in InterCode-Bash, with their frequencies on a log scale.] [Figure 8: Distribution of gold command difficulty for InterCode-SQL task data adapted from the Spider SQL dataset.] # A.3 SQL Environment
2306.14898#74
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
75
X: 280 Y: 295 Width: 98 Height: 105 white umbrellas outside a building X: 91 Y: 66 Width: 161 Height: 53 a white tiled staircase X: 47 Y: 367 Width: 109 Height: 126 someone taking a photo X: 12 Y: 3 Width: 75 Height: 73 people standing on a deck X: 104 Y: 166 Width: 153 Height: 165 plastic blue chairs X: 146 Y: 318 Width: 93 Height: 129 a herd of asian elephants X: 83 Y: 110 Width: 409 Height: 308 the elephant is stepping out of the river X: 388 Y: 218 Width: 86 Height: 114 a small elephant X: 302 Y: 309 Width: 71 Height: 95 this man is dressed in white X: 208 Y: 416 Width: 49 Height: 82 this man is dressed in white X: 208 Y: 416 Width: 49 Height: 82 she is wearing a sun hat and a dress X: 175 Y: 287 Width: 64 Height: 112 the people are gathered X: 88 Y: 165 Width: 163 Height: 157 they are watching the elephants X: 87 Y: 167 Width: 165 Height: 179 this person is recording the elephants X: 18 Y: 6 Width: 74 Height: 113 a rapid flowing river X: 349 Y: 73 Width: 143 Height: 243 a herd of a
2306.14565#75
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
75
Figure 8: Distribution of gold command difficulty for InterCode-SQL task data adapted from the Spider SQL dataset. # A.3 SQL Environment Environment Definition. The Dockerfile defining the SQL-based environment inherits from the MySQL image and adds a .sql file setup script to the /docker-entrypoint-initdb.d directory within the Docker image; this is a special directory made for container initialization. On container start-up, the added .sql file, which creates and populates databases with tables and tables with records, is automatically invoked. Since the InterCode-SQL dataset does not feature any queries that involve modifying the database in any manner (i.e. no INSERT, UPDATE, or DELETE commands), there is no reset mechanism written into the Dockerfile definition that is invoked before each task episode; with that said, adding a reset script or version control to the Dockerfile is simple.
2306.14898#75
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
76
the elephants X: 18 Y: 6 Width: 74 Height: 113 a rapid flowing river X: 349 Y: 73 Width: 143 Height: 243 a herd of elephants moving through a town X: 78 Y: 108 Width: 386 Height: 274 the trunk of an elephant X: 395 Y: 261 Width: 15 Height: 21 a blue awning of a building X: 66 Y: 187 Width: 91 Height: 104 blue chairs on a patio X: 152 Y: 324 Width: 76 Height: 120 white umbrellas on a patio X: 104 Y: 84 Width: 113 Height: 30 a trunk of an elephant X: 335 Y: 254 Width: 14 Height: 27 the tail of an elephant X: 308 Y: 233 Width: 16 Height: 32 an orange bucket on a patio X: 95 Y: 348 Width: 30 Height: 33 the leaves of a tree X: 200 Y: 17 Width: 41 Height: 33 the leaves of a tree X: 361 Y: 36 Width: 43 Height: 23 Task: image captioning, Image Sentiment Analysis, Activity Recognition, Object Interaction Analysis, Object Attribute Detection, Multi-choice VQA, Visual Entailment, Visual Commonsense reasoning, Styled Image Caption, Referential Expression Grounding. Come up with 20 diverse instructions for all the
2306.14565#76
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
76
InterCode-SQL dataset. InterCode-SQL is adopted from the development set of the Spider dataset [55]. Spider 1.0 is a large-scale cross-domain dataset on generating SQL queries from natural language questions whose development set contains 1034 pairs of <instruction, gold> task instances spanning 20 databases. The distribution of queries according to their hardness criterion is shown in Figure 8. As discussed in Section 3.3, a filtering criterion narrows down the Spider dataset’s information to only the necessary components. We do not add anything to the Spider dataset that was not originally available. The Spider 1.0 dataset is available for use under the CC BY-SA 4.0 license. MySQL databases. We first resolve data types for primary, foreign key pairs across the provided table schemas in Spider for conflicting instances and generate the corresponding SQLite databases. Next, to align with our Docker-supported environment, we convert the SQLite databases to MySQL format using sqlite3mysql [41], a Python library, and then generate a unified MySQL dump having schemas for all the tables. To handle case-sensitive table name discrepancies between the queries and the underlying schema in the original Spider dataset, we activate the lower_case_table_names setting in our evaluation environment. Additionally, for proper access controls, we create a test user and grant them all privileges for all the tables.
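The access-control step described above (creating a test user with full privileges) could be scripted roughly as follows; this is a sketch under the assumption of a locally reachable MySQL server and the pymysql client, with host, credentials, and the user name as placeholders rather than values from the InterCode repository.

```python
# Rough sketch of the access-control step: create a test user and grant it all
# privileges. Host, credentials, and the user name are placeholders, not values
# taken from InterCode; lower_case_table_names is a server option set separately.
import pymysql

conn = pymysql.connect(host="localhost", user="root", password="root-password")
try:
    with conn.cursor() as cur:
        cur.execute("CREATE USER IF NOT EXISTS 'test'@'%' IDENTIFIED BY 'test-password'")
        cur.execute("GRANT ALL PRIVILEGES ON *.* TO 'test'@'%'")
        cur.execute("FLUSH PRIVILEGES")
    conn.commit()
finally:
    conn.close()
```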
2306.14898#76
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
77
VQA, Visual Entailment, Visual Commonsense reasoning, Styled Image Caption, Referential Expression Grounding. Come up with 20 diverse instructions for all the tasks above with different language styles and accurate answers. The instructions should contain interrogative sentences and declarative sentences. The answers should be less than 30 words. Each task should have less than 3 instructions. Output format: Instruction: instruction example Answer: answer example Task: task sample
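Output in the "Instruction: ... Answer: ... Task: ..." format requested above could be split into records with something like the following; this parser is our illustration, not part of the paper's pipeline.

```python
# Illustrative parser (not from the paper) for GPT-4 output that follows the
# "Instruction: ... Answer: ... Task: ..." format requested above.
import re

BLOCK = re.compile(
    r"Instruction:\s*(?P<instruction>.+?)\s*Answer:\s*(?P<answer>.+?)\s*Task:\s*(?P<task>.+?)(?=\s*Instruction:|\Z)",
    re.DOTALL,
)

def parse_blocks(text: str):
    return [m.groupdict() for m in BLOCK.finditer(text)]

sample = ("Instruction: Which animal is featured prominently? Answer: Elephants. "
          "Task: Multi-choice VQA Instruction: Describe the scene. Answer: A herd of "
          "elephants near a river. Task: Image Captioning")
print(parse_blocks(sample))
```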
2306.14565#77
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
77
Reward function. The completion evaluation mechanism compares the output of the gold SQL command with the latest execution output (i.e. the latest observation) from the agent's interaction trajectory. The execution output of all gold SQL queries is a list of records. Each record is a tuple of one or more values that may be of different types. For any single execution output, the order of types for every record is identical. Given the agent command(s)' latest execution output A and the gold command's execution output G, we formulate the reward function as follows: R = \frac{A \cap G}{A \cup G} \cdot \frac{\mathrm{kendalltau}\left(A \cap (A \cap G),\; G \cap (A \cap G)\right) + 1}{2} \quad (2) [Figure 9 (excerpt): Question: Find the first name and age of students who have a dog but do not have a cat as a pet. Agent: SHOW TABLES → SQL Env: [('has_pet',), ('pets',), ('Student',)]. Agent: DESC has_pet → SQL Env: [('StuID', 'int', 'YES', 'MUL', None, ...)]. Agent: SELECT fname, age FROM student WHERE ... → SQL Env: [('Tracy', 19)]. Agent: submit.]
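A minimal Python sketch of Equation (2), assuming duplicate-aware set operations via collections.Counter and scipy.stats.kendalltau for the order coefficient; the function name and the first-occurrence ranking of duplicate records are our simplifications, not the exact InterCode evaluator.

```python
# Minimal sketch of Equation (2): duplicate-aware IoU of the agent's latest output A
# and the gold output G, scaled by a Kendall's-tau order coefficient mapped from
# [-1, 1] to [0, 1]. `sql_reward` and the first-occurrence ranking of duplicate
# records are our simplifications, not the exact InterCode evaluator.
from collections import Counter
from math import isnan
from scipy.stats import kendalltau

def sql_reward(agent_records, gold_records):
    if not isinstance(agent_records, list):           # e.g. an error string: no credit
        return 0.0
    a, g = Counter(agent_records), Counter(gold_records)
    inter, union = sum((a & g).values()), sum((a | g).values())
    if union == 0 or inter == 0:
        return 0.0
    shared = set((a & g).keys())
    a_shared = [r for r in agent_records if r in shared]
    g_shared = [r for r in gold_records if r in shared]
    gold_rank = {}                                     # rank by first occurrence in gold
    for i, r in enumerate(g_shared):
        gold_rank.setdefault(r, i)
    tau, _ = kendalltau(range(len(a_shared)), [gold_rank[r] for r in a_shared])
    order = 1.0 if isnan(tau) else (tau + 1) / 2       # degenerate cases: no order penalty
    return (inter / union) * order
```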
2306.14898#77
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14898
78
[Figure 9/10 (excerpt): Agent command(s): 1. SHOW TABLES 2. DESC has_pet 3. SELECT StuID from has_pet 4. DESC Student (additional cmds ...). Latest Std. Out: [(Ross, 29), (Ross, 29), (Chandler, 28), (Rachel, 27), (Monica, 25)]. Gold Std. Output: [(Ross, 29), (Ross, 29), (Joey, 27), (Rachel, 27), (Phoebe, 26)]. Gold Command: SELECT T1.fname, T1.age FROM Student AS T1 JOIN Has_Pet AS T2 ON ...] Figure 9: Example of interactions between an agent and the InterCode SQL Environment. Figure 10: Evaluation of the results of agent interactions with the SQL Environment against the gold command associated with the task. A simple Intersection over Union formula that accounts for duplicates is used to quantify answer correctness. Task completion is a reward of 1.
2306.14898#78
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
79
Output: Instruction: Craft a brief narrative about the baby elephant and adult elephant. Answer: A baby elephant is depicted behind an adult elephant, possibly seeking protection. Task: Image Captioning Instruction: Write a concise description of the entire scene in this image. Answer: A herd of elephants, including baby elephants, is seen near a river with people watching and taking photos. There are also green chairs, white umbrellas, and blue awnings nearby. Task: Image Captioning Instruction: What is the overall sentiment conveyed by this image? Answer: The sentiment of the image is predominantly peaceful and joyful, showcasing the harmonious interaction of wildlife with human presence. Task: Image Sentiment Analysis Instruction: How would you interpret the sentiment of the scene involving the man in white shirt? Answer: The man in the white shirt exudes a feeling of calmness and interest as he observes the scene. Task: Image Sentiment Analysis Instruction: Explain how the elephants and the humans interact in this image. Answer: The humans are observing and photographing the elephants, while the elephants seem to be moving freely and peacefully around them. Task: Object Interaction Analysis Instruction: Analyze the interaction between the objects
2306.14565#79
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
79
We employ Intersection over Union (IoU), or more formally the Jaccard Index, to quantify the correctness of the latest execution output generated by the agent against the gold output. If the latest execution output of the SQL query is not in the form of a list of records (i.e. a string error message), the reward is 0 by default. Among the items that lie in the intersection of the agent and gold execution outputs, we also apply a penalty if the records are in the incorrect order. Since achieving the correct order of fields in a record is of non-trivial importance to addressing many SQL queries correctly, we do not do any re-ordering or pre-processing of the list of records. Therefore, a record formatted as ("Ross", 29) is not awarded any credit against a gold output that includes (29, "Ross"). To quantify how sorted the agent output is relative to the gold output, we lean on Kendall’s τ and adjust the output range to [0, 1]. The IoU score is then directly scaled by this coefficient.
2306.14898#79
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
80
the elephants, while the elephants seem to be moving freely and peacefully around them. Task: Object Interaction Analysis Instruction: Analyze the interaction between the objects in the foreground and the background. Answer: The foreground, featuring green chairs and people, provides a contrast to the background where the elephants are moving. Task: Object Interaction Analysis Instruction: Which type of animal is featured prominently in the image? Choices: A) Lions B) Elephants C) Giraffes D) Zebras Answer: B) Elephants Task: Multi-choice VQA Instruction: What is the primary color of the umbrellas present in the image? Choices: A) Blue B) Green C) White D) Red Answer: C) White Task: Multi-choice VQA Instruction: Based on the image, does it entail that the people are afraid of the elephants? Answer: No, the image does not entail that the people are afraid of the elephants. Task: Visual Entailment Instruction: Based on the image, does it entail that the location is in a city center or on the outskirts? Answer: The image entails that the location is more likely on the outskirts, not in a city center. The presence of a herd of elephants suggests a location
2306.14565#80
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
80
All in all, only a correctly ordered list with the exact set of records found in the gold output would receive a max score of 1, which corresponds to task completion. Figure 10 visualizes the reward function for an example set of outputs. Note that in the main paper, the Success Rate metric is used; the scalar 3/7 output shown in the figure is treated as a 0 when quantifying whether the task was completed via the 0/1 Success Rate metric. As mentioned in the discussion of the Bash reward function, this reward function also aims to be a richer and fairer continuous evaluation metric of a model’s reasoning abilities compared to a binary 0/1 task completion score. # A.4 Python Environment
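As a usage note for the illustrative sql_reward sketch given earlier, the figure's agent/gold pair can be scored and then collapsed to the binary Success Rate; because the sketch ranks duplicate records only approximately, its value may differ from the 3/7 shown in the figure.

```python
# Usage sketch (depends on the illustrative sql_reward defined earlier): score the
# Figure 10 outputs, then collapse to the binary Success Rate. The approximate
# duplicate handling means the value need not equal the paper's 3/7 exactly.
agent_out = [("Ross", 29), ("Ross", 29), ("Chandler", 28), ("Rachel", 27), ("Monica", 25)]
gold_out  = [("Ross", 29), ("Ross", 29), ("Joey", 27), ("Rachel", 27), ("Phoebe", 26)]

reward = sql_reward(agent_out, gold_out)      # partial credit strictly between 0 and 1
success = 1.0 if reward == 1.0 else 0.0       # binary Success Rate used in the main paper
print(round(reward, 3), success)
```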
2306.14898#80
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14898
81
# A.4 Python Environment Environment definition. The InterCode-Python task environment inherits from a bare-minimum Python 3.9 image that provides the basic essentials for initializing a Python interpreter. We were unable to determine how to initialize a Python interpreter within a Dockerfile such that the container would then be capable of automatically executing Python commands sent to it while continuously logging every action/observation per turn. To overcome this, we create and define a backend application that runs within the Docker container, simulates a Python interpreter, and is responsible for handling input/output. By having the application sit between the agent's actions and the interpreter, we are able to log every episode faithfully in addition to providing an environment that is agent-friendly and faithful to the experience of a real Python interpreter. [Figure 12 (excerpt): the agent's solution (looping over enumerate(str1) and returning c when str1[:index+1].count(c) > 1), its unit tests (e.g. assert first_repeated_char("abc") == "None"), and the gold command are compared to evaluate the submission.]
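A backend application of the kind described above (sitting between the agent's actions and a persistent interpreter, capturing output per turn) could look roughly like this stdin/stdout loop built on the standard-library code.InteractiveConsole; this is our sketch of the idea, not the InterCode implementation.

```python
# Rough sketch (not the InterCode implementation) of a backend loop that keeps one
# persistent interpreter session and returns captured output for each agent action.
import code
import contextlib
import io
import sys

console = code.InteractiveConsole()            # persistent namespace across turns

def execute_turn(action: str) -> str:
    """Run one agent action in the shared interpreter and return what it printed."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf), contextlib.redirect_stderr(buf):
        console.push(action)                   # multi-line blocks need repeated pushes
    return buf.getvalue()

if __name__ == "__main__":
    for line in sys.stdin:                     # one agent action per line
        out = execute_turn(line.rstrip("\n"))
        sys.stdout.write(out if out else "(No Std. Out)\n")
        sys.stdout.flush()
```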
2306.14898#81
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14898
82
Figure 11: Example of interactions between an agent and the InterCode Python Environment. Figure 12: In this setting, an agent interacts with a Python interpreter to 1. implement the requested method and 2. write test cases to determine function correctness. Upon submission, the reward function then evaluates the agent's implementation with a set of unit tests. [Figure 11 (excerpt): Question: Write a python function to find the first repeated character in a given string. def first_repeated_char(str1). Agent: def first_repeated_char(str1): for index, c in enumerate(str1): if str1[:index+1].count(c) > 1: return c. Py Intr.: (No Std. Out). Agent: first_repeated_char("aa") → Py Intr. output. Agent: first_repeated_char("12312") → Py Intr. output. Agent: submit.]
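For reference, here is a runnable version of the agent solution and the MBPP-style unit tests that the figure excerpt above garbles; the exact test strings are repaired guesses from the extraction, and the trailing return "None" follows the asserts rather than the truncated gold snippet.

```python
# Cleaned-up reconstruction of the figure's agent solution plus MBPP-style asserts.
# Exact test strings are repaired guesses from the garbled extraction.
def first_repeated_char(str1):
    for index, c in enumerate(str1):
        if str1[:index + 1].count(c) > 1:
            return c
    return "None"

assert first_repeated_char("abcabc") == "a"
assert first_repeated_char("abc") == "None"
assert first_repeated_char("123123") == "1"
```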
2306.14898#82
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
83
Output: Instruction: Identify and describe the attributes of the bucket in the image. Answer: The bucket is orange, possibly made of plastic, and appears to be on a sidewalk or patio. Task: Object Attribute Detection Instruction: Assess the color of the baby elephant. Is it gray or brown? Answer: Without visual data, we cannot accurately determine the color of the baby elephant. Task: Object Attribute Detection Instruction: Based on the image, what is the probable reason for the congregation of people? Answer: The probable reason for the congregation of people is to observe the elephants passing through. Task: Visual Commonsense Reasoning Instruction: Why might the baby elephant be close to the adult elephant? Answer: The baby elephant is likely close to the adult elephant for protection and guidance as they move through the area. Task: Visual Commonsense Reasoning Instruction: Locate "the man dressed in white". Answer: The man dressed in white is located at the coordinates (208, 416), with a width of 49 and a height of 82. Task: Referential Expression Grounding Instruction: Identify the location of "the woman wearing a straw hat". Answer: The woman wearing a straw hat is located at X: 179 Y: 285 with
2306.14565#83
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
83
Figure 11: Example of interactions between an agent and the InterCode Python environment.

InterCode-Python dataset. A large majority of the code datasets popular within the NLP community are based on Python and present code completion as the primary task [9, 4, 20]. In the original problem setting, a task worker is asked to synthesize code in a zero-, one-, or few-shot setting with little to no access to an execution environment. In the interactive setting, task workers are asked to accomplish the same objective, but are informed that they have a Python environment to do whatever may help them write the function correctly, such as prototyping different implementations and writing/executing their own unit tests. Therefore, datasets such as HumanEval, APPS, and MBPP require little to no revision to be usable within the InterCode environment; the only processing necessary for all three is renaming dataset attributes to InterCode-compatible names. A visualization of an example trajectory of interactions between an agent and the Python interpreter is presented in Figure 11.

Reward function. We preserve the original metric of the proportion of unit tests passed to evaluate agent implementations, with all tests passing being equivalent to task completion. Complementary to the visualization of interactions, we also show how InterCode-Python performs automatic evaluation of an agent's implementation of the desired function in Figure 12.
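A minimal sketch of such a unit-test-based reward is shown below; the test-harness format and helper names are illustrative assumptions, not the actual InterCode-Python evaluation code.

```python
# Minimal sketch of a unit-test-based reward (fraction of tests passed),
# assuming each test is a zero-argument callable that raises AssertionError
# on failure. Illustrative only; not the actual InterCode-Python code.
from typing import Callable, List


def unit_test_reward(tests: List[Callable[[], None]]) -> float:
    """Return the fraction of unit tests that pass; 1.0 means task completion."""
    passed = 0
    for test in tests:
        try:
            test()               # a test passes if it raises no exception
            passed += 1
        except Exception:
            pass                 # assertion or runtime errors count as a failed test
    return passed / len(tests) if tests else 0.0


# Example usage with a candidate implementation of an MBPP-style function.
def candidate_add(a, b):
    return a + b


def test_basic():
    assert candidate_add(1, 2) == 3


def test_negative():
    assert candidate_add(-1, 1) == 0


print(unit_test_reward([test_basic, test_negative]))  # 1.0 -> all tests pass
```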
2306.14898#83
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
84
Instruction: Identify the location of "the woman wearing a straw hat". Answer: The woman wearing a straw hat is located at X: 179 Y: 285 with Width: 23 and Height: 43. Task: Referential Expression Grounding Instruction: Provide a poetic caption for the image. Answer: Beneath a sky kissed by the sun, a parade of ancient giants meanders, while observers stand still in hushed admiration. Task: Styled Image Caption Instruction: Caption the image in a romantic style. Answer: In a scene gently touched by nature's grandeur, silent stories of love and companionship play out as the elephants pass by. Task: Styled Image Caption Instruction: Identify the primary activity happening in the scene described by the image. Answer: The primary activity in the image is observing and photographing a herd of elephants moving through a town. Task: Activity Recognition Instruction: By analyzing the given information, what could you infer about the main event in the image? Answer: The main event in the image is a herd of elephants moving through a town while being observed and photographed by people. Task: Activity Recognition
2306.14565#84
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
84
# B Experiment Details
# B.1 Model Details

We do not perform any model training for configuring the methods or running the experiments discussed in this project. Our evaluations use inference call requests to OpenAI, PaLM, and HuggingFace API endpoints to run the baseline models on the InterCode tasks. For OpenAI models, we set temperature to 0, top_p to 1, max_tokens to 512, and n (number of completions) to 1. For PaLM models, we set temperature to 0, top_p to 1, and candidate_count (number of completions) to 1. For open-source models, we set max_new_tokens (maximum number of tokens to generate) to 100 and temperature to 0.01. Due to constraints on the context window size, we limit the length of each observation to a maximum of 1000 tokens across all inference calls. The code for configuring API calls can be found in the linked repository.
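For the OpenAI models, a call configured with these parameters might look like the sketch below; it uses the legacy openai (0.x) ChatCompletion interface, and the whitespace-based observation truncation is an illustrative stand-in for whatever tokenizer-based handling the repository actually uses.

```python
# Sketch of an inference call using the parameters listed above, written against
# the legacy openai (0.x) ChatCompletion API. The truncation helper is a crude
# illustrative stand-in; the InterCode repository's actual wrapper may differ.
import openai

MAX_OBS_TOKENS = 1000  # observations are capped before being appended to the prompt


def truncate_observation(observation: str, max_tokens: int = MAX_OBS_TOKENS) -> str:
    # Whitespace "tokens" for illustration; a real implementation would use the
    # model's tokenizer to count tokens exactly.
    tokens = observation.split()
    return " ".join(tokens[:max_tokens])


def query_openai(messages, model="gpt-3.5-turbo"):
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,   # list of {"role": ..., "content": ...} dicts
        temperature=0,
        top_p=1,
        max_tokens=512,
        n=1,
    )
    return response["choices"][0]["message"]["content"]
```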
2306.14898#84
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14898
85
# B.2 Additional Experiments & Analysis

SQL schema ablation. To confirm that the benefits of interaction exceed a simple disparity in information between the Single Turn and Try Again settings, we add the full SQL database schema, providing holistic details of the tables necessary for the given instruction, to the Question message of both prompts, then re-run the comparison for several models. Table 5 indicates that while Single Turn performance improves drastically, a non-trivial difference in favor of Try Again remains. Manual inspection of task episode trajectories shows that selective and fine-grained context discovery (i.e., inspecting specific table records and file content that affect query construction) is still critical to solving tasks efficiently.

Table 5: InterCode-SQL + Schema

                    Single Turn                          Try Again (max 10 turns)
Model / Hardness    Easy   Med    Hard   Extra   All     Easy   Med    Hard   Extra   All
gpt-3.5-turbo       90.7   70.2   59.2   37.3    67.9    92.7   74.9   67.2   43.4    72.8
text-bison-001      89.5   68.2   44.2   19.3    61.4    90.7   70.4   50.0   21.1    63.9
chat-bison-001      79.0   52.0   32.1   15.1    49.2    82.2   56.0   42.5   24.1    54.9
2306.14898#85
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
86
Positive Instance Generation Input Prompt: Give an image with following information: bounding box, positions that are the object left-top corner coordinates(X, Y), object sizes(Width, Height). Highly overlapping bounding boxes may refer to the same object. bounding box: eyebrow on the man X: 79 Y: 77 Width: 36 Height: 13 forehead on the man X: 73 Y: 52 Width: 63 Height: 26 two people playing game X: 2 Y: 1 Width: 330 Height: 497 hair on the man X: 52 Y: 11 Width: 78 Height: 24 writing on mans shirt X: 0 Y: 366 Width: 91 Height: 47 logo on mans shirt X: 64 Y: 327 Width: 36 Height: 57 wii controller in right hand X: 144 Y: 351 Width: 107 Height: 76 a pair of glasses on the man X: 16 Y: 91 Width: 149 Height: 32 the head of a man playing a game X: 0 Y: 7 Width: 153 Height: 203 upper teeth of the young man X: 113 Y: 154 Width: 26 Height: 6 blurry left hand of the man X: 192 Y: 310 Width: 80 Height: 78 a blue tshirt on the man X: 0 Y: 171 Width: 174 Height: 254 cords of the wii game X: 234 Y: 387 Width: 52
2306.14565#86
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14565
87
80 Height: 78 a blue tshirt on the man X: 0 Y: 171 Width: 174 Height: 254 cords of the wii game X: 234 Y: 387 Width: 52 Height: 111 a wide set of blinds X: 49 Y: 1 Width: 279 Height: 270 a young man on the sofa X: 197 Y: 90 Width: 134 Height: 278 writing and an insignia on a shirt X: 8 Y: 325 Width: 88 Height: 85. a boy wearing glasses X: 34 Y: 37 Width: 132 Height: 118 a boy playign a game X: 49 Y: 32 Width: 137 Height: 133 a white wii remote X: 138 Y: 341 Width: 160 Height: 74 a boy earing a shirt X: 24 Y: 212 Width: 164 Height: 269 glasses on a mans face X: 34 Y: 86 Width: 131 Height: 44 nintendo wii remote control X: 144 Y: 350 Width: 70 Height: 43 nintendo wii gaming nunchuk X: 198 Y: 310 Width: 48 Height: 47 the man is asian X: 11 Y: 20 Width: 247 Height: 430 the man has a controller X: 129 Y: 332 Width: 145 Height: 115 Task: Image Captioning, Image Sentiment Analysis, Activity Recognition, Object Interaction Analysis, Object Attribute Detection, Multi-choice VQA, Visual Entailment,
2306.14565#87
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
87
Trends of admissible actions. Table 6 shows that for the SQL task, models generate admissible actions with increasingly higher rates early on; in initial turns, models will tend to hallucinate a query with fabricated table and column names at a high frequency. The drop in error rate between the first and second turns can largely be attributed to the model’s decision to begin exploring context; 60.3% of second turn actions contain either the SHOW TABLES or DESC keywords. Prompting strategies (i.e. ReAct, Plan & Solve), explicit phrasing that encourages exploration, and demonstrations diminish a model’s default tendency to hallucinate a query in the first turn. This trend is not found in Bash. This can likely be attributed to the nature of the instructions; unlike the SQL instructions which simply pose a question and do not have any explicit references to SQL commands or clauses, Bash instructions will typically include keywords that correspond directly to useful Linux commands or give insight into the file system’s internal structure. These signals reduce the need for context discovery. Therefore, successful task completion in Bash tends to lean towards 1) Figuring out which flags, options, and arguments to configure a command with and 2) How to string together commands or pass outputs from one command to the next correctly.
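The 60.3% figure above is the kind of statistic that can be read directly off logged trajectories; a small illustrative check is sketched below, where the trajectory format (one list of per-turn action strings per episode) is an assumption rather than InterCode's actual logging schema.

```python
# Illustrative sketch: fraction of second-turn SQL actions that perform context
# discovery (SHOW TABLES / DESC). The trajectory format (a list of episodes,
# each a list of per-turn action strings) is assumed for illustration and is
# not InterCode's actual logging schema.
from typing import List

CONTEXT_KEYWORDS = ("SHOW TABLES", "DESC")


def second_turn_discovery_rate(episodes: List[List[str]]) -> float:
    second_turns = [episode[1] for episode in episodes if len(episode) > 1]
    if not second_turns:
        return 0.0
    hits = sum(
        any(keyword in action.upper() for keyword in CONTEXT_KEYWORDS)
        for action in second_turns
    )
    return hits / len(second_turns)


# Example: two episodes, one of which explores the schema on its second turn.
episodes = [
    ["SELECT name FROM singer", "SHOW TABLES", "SELECT name FROM singers"],
    ["SELECT * FROM orders", "SELECT id FROM orders LIMIT 5"],
]
print(second_turn_discovery_rate(episodes))  # 0.5
```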
2306.14898#87
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
88
Task: Image Captioning, Image Sentiment Analysis, Activity Recognition, Object Interaction Analysis, Object Attribute Detection, Multi-choice VQA, Visual Entailment, Visual Commonsense Reasoning, Styled Image Caption, Referential Expression Grounding. Come up with 20 diverse instructions for all the tasks above with different language styles and accurate answers. The instructions should contain interrogative sentences and declarative sentences. The answers should be less than 30 words. Each task should have less than 3 instructions.
Output format:
Instruction: instruction example
Answer: answer example
Task: task sample
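A rough sketch of how such a generation prompt could be assembled from region annotations is shown below; the data structure and helper name are illustrative assumptions rather than the authors' released pipeline, and the GPT-4 call that would consume the prompt is omitted.

```python
# Rough sketch of assembling the positive-instance generation prompt described
# above from bounding-box (region) annotations. The data structure and helper
# are illustrative assumptions, not the authors' released pipeline.
from typing import Dict, List

TASKS = (
    "Image Captioning, Image Sentiment Analysis, Activity Recognition, "
    "Object Interaction Analysis, Object Attribute Detection, Multi-choice VQA, "
    "Visual Entailment, Visual Commonsense Reasoning, Styled Image Caption, "
    "Referential Expression Grounding"
)


def build_generation_prompt(regions: List[Dict]) -> str:
    header = (
        "Give an image with following information: bounding box, positions that "
        "are the object left-top corner coordinates(X, Y), object sizes(Width, Height). "
        "Highly overlapping bounding boxes may refer to the same object.\nbounding box:\n"
    )
    boxes = "\n".join(
        f"{r['phrase']} X: {r['x']} Y: {r['y']} Width: {r['width']} Height: {r['height']}"
        for r in regions
    )
    footer = (
        f"\nTask: {TASKS}.\n"
        "Come up with 20 diverse instructions for all the tasks above with different "
        "language styles and accurate answers. The instructions should contain "
        "interrogative sentences and declarative sentences. The answers should be "
        "less than 30 words. Each task should have less than 3 instructions.\n"
        "Output format:\nInstruction: instruction example\nAnswer: answer example\n"
        "Task: task sample"
    )
    return header + boxes + footer


# Example usage with one Visual Genome-style region annotation.
regions = [{"phrase": "orange bucket on sidewalk", "x": 80, "y": 334, "width": 50, "height": 60}]
print(build_generation_prompt(regions))
```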
2306.14565#88
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
88
For both Bash and SQL, in later turns, the rate of admissible actions does not improve consistently. The actions in these later turns are usually attempts to answer the original instruction. At these stages, a model will tend to make small, cursory adjustments to the prior action based on execution feedback, often resulting in both a repetition of the same types of mistakes and hallucinations that introduce new issues. In these moments, compared to such minor perturbations, alternative reasoning capabilities such as context discovery and modularized problem solving are often more efficient ways to get the relevant insights needed to better decide how to fix the prior turns' issues. As corroborated by Figure 3, models struggle to take advantage of additional context in longer task episodes or horizons. Making the most of multiple queries is an open challenge with exciting implications for solving more difficult coding tasks.

Turn    1      2      3      4      5      6      7      8      9      10
SQL     90.2   46.4   34.4   39.7   31.1   42.9   51.5   47.4   48.4   46.6
Bash    23.1   28.6   34.7   37.5   37.6   42.9   39.3   37.1   33.7   38.2

Table 6: Error % (Average ratio of non-admissible actions) per turn for the Try Again prompting scheme using a GPT 3.5 model on the Bash and SQL InterCode datasets.
2306.14898#88
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14898
89
Table 6: Error % (Average ratio of non-admissible actions) per turn for the Try Again prompting scheme using a GPT 3.5 model on the Bash and SQL InterCode datasets.

Robustness results. We conducted an evaluation to assess the robustness of the reported accuracy metrics for the models. In order to maintain consistency in the evaluation, we focused on the performance across file systems 2, 3, and 4 (shown in Figure 6), which were designed to have similar difficulty levels. File system 1, intentionally made harder, was not included in this analysis. The standard errors for the Single Turn and Try Again modes are presented in Table 7. The Try Again mode leverages interaction to consistently outperform the Single Turn mode across all models.

Model               Single Turn      Try Again
text-davinci-003    31.40 ± 1.35     43.13 ± 5.98
gpt-3.5-turbo       36.63 ± 1.83     47.40 ± 1.23
gpt-4               38.37 ± 1.20     52.70 ± 3.50
text-bison-001      18.83 ± 3.57     22.40 ± 3.35
chat-bison-001      20.47 ± 1.89     21.67 ± 1.81
Vicuna-13B          16.73 ± 5.00     27.67 ± 4.15
StarChat-16B        19.37 ± 3.04     27.17 ± 2.74
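A short sketch of how such mean ± standard error entries can be computed from per-file-system success rates follows; the numeric values in the example are placeholders, not the paper's raw results.

```python
# Sketch of computing the "mean ± standard error" entries reported in Table 7
# from per-file-system success rates. The rates below are placeholders for
# illustration, not the paper's raw results.
from math import sqrt
from statistics import mean, stdev


def mean_and_stderr(success_rates):
    m = mean(success_rates)
    se = stdev(success_rates) / sqrt(len(success_rates))  # standard error of the mean
    return m, se


# Hypothetical success rates for one model on file systems 2, 3, and 4.
rates = [35.0, 38.5, 36.4]
m, se = mean_and_stderr(rates)
print(f"{m:.2f} ± {se:.2f}")
```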
2306.14898#89
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
90
Output: Instruction: Identify the primary activity taking place in the described image. Answer: The primary activity is two people playing a game. Task: Activity Recognition What is the sentiment expressed in the image based on the given details? Answer: The sentiment is joyful and engaged, as depicted by a man playing a game. Task: Image Sentiment Analysis Provide a concise caption for the given image. Answer: An Asian man joyfully playing a Wii game on the sofa, wearing a blue shirt with an insignia. Task: Image Captioning Instruction: Specify the emotion depicted in the image based on the objects Answer: The emotion depicted is happiness, inferred from the activity and facial features of the man. Task: Image Sentiment Analysis Instruction: Based on the details given, state which object is interacting with the man. Answer: The man is interacting with the Wii controller. Task: Object Interaction Analysis Instruction: Identify the unique attribute associated with the man in the given image. Answer: The unique attribute is that the man is Asian and wearing glasses. Task: Object Attribute Detection Instruction: From the listed objects, which one is closest to the man's face? Answer: The glasses are closest to the man's face. Task: Referential Expression Grounding Instruction: Is the statement
2306.14565#90
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
90
Table 7: (Robustness Results) Success Rate with standard errors for single- vs. multi-turn evaluation on InterCode-Bash (refer to §A.2). Best metrics are in bold. Both modes display significant standard errors (as expected), but Try Again still outperforms Single Turn by a large margin.

# B.3 Additional Prompting Strategy

To gauge the significance of designing prompting strategies that can successfully solve the interactive coding task, we attempt to devise a more performant approach by chaining together existing techniques, where each technique is meant to elicit a different, relevant reasoning skill. To this end, we design a hybrid prompting strategy that combines Plan & Solve and Try Again, which we refer to as "Plan & Solve + Refine". This strategy is meant to complement a model's planning, modularized task completion, and context discovery abilities with error correction. Figure 13 visualizes this prompting strategy's workflow. The full prompting template is included in § B.7.
2306.14898#90
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
91
one is closest to the man's face? Answer: The glasses are closest to the man's face. Task: Referential Expression Grounding Instruction: Is the statement "The man is playing a video game" true or false based on the image details? Answer: True Task: Visual Entailment Instruction: Based on the information, is it reasonable to conclude that the man is indoors? Answer: Yes, it's reasonable, due to the presence of a sofa and a set of blinds. Task: Visual Commonsense Reasoning Instruction: Write a caption for the image with a humorous style. Answer: "Multitasking man: mastering Wii, sporting stylish glasses, and showing off that shirt insignia all at once!" Task: Styled Image Caption
2306.14565#91
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
91
[Figure 13 diagram: the agent receives an initial message ("Let's understand the problem and devise a plan... Then, let's carry out the plan..."), executes the plan's actions against the environment while receiving observations, and, after a post-plan message noting that issues remain, continues refining until the reward is 1 or the turn limit is exceeded.]

Figure 13: Visualization of the Plan & Solve + Refine strategy. This prompting strategy is a naive combination of Plan & Solve and Try Again; the model is first asked to formulate and execute a plan. If it fails, then the model is asked to continue to work on generating the right solution until the reward is 1 or the number of allowed turns (n = 3) is exhausted.
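A schematic sketch of this control flow is given below; `env`, `agent`, and the message strings are stand-ins for illustration and do not reflect InterCode's actual interfaces or the prompt templates in § B.7.

```python
# Schematic sketch of the Plan & Solve + Refine control flow described above.
# `env`, `agent`, and the message strings are illustrative stand-ins and do not
# reflect InterCode's actual interfaces or prompt templates.
def plan_and_solve_refine(env, agent, question, max_refine_turns=3):
    # Plan & Solve stage: ask the model to devise a plan, then execute it.
    agent.prompt(
        "Let's first understand the problem and devise a plan... "
        "Then, let's carry out the plan.\nQuestion: " + question
    )
    reward = 0.0
    for action in agent.execute_plan():      # one environment step per plan step
        observation, reward = env.step(action)
        agent.observe(observation)
        if reward == 1.0:
            return reward                    # solved during the planning stage

    # Refine (Try Again-style) stage: keep correcting until solved or out of turns.
    agent.prompt(
        "You have finished executing the plan, but it seems like there are "
        "still issues with your answer. Please continue working on it."
    )
    for _ in range(max_refine_turns):
        action = agent.act()
        observation, reward = env.step(action)
        agent.observe(observation)
        if reward == 1.0:
            break
    return reward
```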
2306.14898#91
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14898
92
We run small-scale experiments in the same fashion as discussed in Section 5.2 on the SQL task for 44 randomly selected task instances. These preliminary tests have an average Success Rate of 45.5, a Number of Turns of 6.34 (across both planning and refinement stages), and an Error % of 22.4. Compared to the original Plan & Solve performance presented in Section 5.2, this strategy does not yield a significant improvement. Upon inspection of the trajectories, it is evident that while models are capable of the individual reasoning techniques, eliciting those techniques in a pre-determined order is not enough. In addition, compared to human problem solvers, such reasoning frameworks do not intuitively allow the strategy to be adjusted based on execution output. Therefore, mechanisms that allow a model to freely decide which strategy to employ at any turn of a task episode are integral to improving Success Rate, and they are non-trivial to design because simple adjustments to existing prompting strategies are not enough to close this gap.

# B.4 Task Trajectory Analyses

In this section, we discuss some of the behaviors and phenomena observed under different prompting schemes. We also include annotated trajectories that are representative of these observations.
2306.14898#92
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
93
Output: Instruction: Determine the color of the man's shirt in the image. Answer: The man's shirt is blue. Task: Object Attribute Detection Instruction: Based on the details, can we infer that the game being played is a Nintendo Wii game? Answer: Yes, we can infer that the game being played is a Nintendo Wii game due to the presence of a Wii controller and nunchuk. Task: Visual Commonsense Reasoning Instruction: Describe the location of the man in relation to the set of blinds. Answer: The man is positioned in front of the set of blinds. Task: Referential Expression Grounding Instruction: Given the data, could the statement "The boy is playing a game with a PS4 controller" be true? Answer: No, it's false because the controller specified is a Wii controller, not a PS4 controller. Task: Visual Entailment Identify any identifiable attributes of the man in the given image. Answer: The man is young, Asian, wears glasses, has a shirt with a logo and writing, and is using a Wii controller. Task: Object Attribute Detection Instruction: Identify the object the man is holding in his right hand based on the image. A) PS4 controller B) Wii controller C) Mobile phone D) Answer: B) Wii controller Task: Multi-choice
2306.14565#93
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
93
Inst: Count md5sum of all '*.py' files in /testbed folder with subfolders.
Gold: find /testbed -type f -name "*.py" -exec md5sum {} + | awk '{print $1}' | sort | md5sum
Out: 7e9ff48e395ab4e0acb8336c4afe4504 -
Act: find /testbed -name "*.py" -type f -exec md5sum {} + | wc -l
Obs: 4
Act: find /testbed -name "*.py" -type f -exec md5sum {} + | awk 'END{print NR}'
Obs: 4
Act: find /testbed -name "*.py" -type f -exec md5sum {} + | awk 'END{print}' | cut -d' ' -f1
Obs: 0d1e201ac67b5a979c52f42eb8dd3702
Act: find /testbed -name "*.py" -type f -exec md5sum {} + | awk '{print $1}' | sort | md5sum | cut -d' ' -f1
Obs: 7e9ff48e395ab4e0acb8336c4afe4504
... (6 more
2306.14898#93
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
94
is holding in his right hand based on the image. A) PS4 controller B) Wii controller C) Mobile phone D) Answer: B) Wii controller Task: Multi-choice VQA Instruction: Determine the color of the man's shirt from the image description. A) Red B) Black C) Blue D) Green Answer: C) Blue Task: Multi-choice VQA Instruction: Given the data, state the activity that the man and the boy are participating in. Answer: The man and the boy are participating in gaming. Task: Activity Recognition Instruction: Using an old-fashioned style of speech, describe the image portrayed. Answer: Behold, a youthful gent of Asian descent, donned in spectacles and a blue tunic, merrily engrossed in a game of the Nintendo Wii. Task: Styled Image Caption Instruction: Based on the image, create a vivid description of the image. Answer: An engaged Asian man with glasses and a logo-clad blue shirt, situated on a sofa, immerses himself in an exhilarating Wii game. Task: Image Captioning
2306.14565#94
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
94
md5sum | cut -d' ' -f1 Obs: 7e9ff48e395ab4e0acb8336c4afe4504 ... (6 more turns) Inst: For each continent, list its id, name, and how many countries it has? Gold: SELECT T1.ContId, T1.Continent, count(*) FROM CONTINENTS AS T1 JOIN COUNTRIES AS T2 ON T1.ContId = T2.Continent GROUP BY T1.ContId; Act: SELECT continent.id, continent.name, COUNT(country.id) AS num_countries FROM continent LEFT JOIN country ON continent.id = country.continent_id GROUP BY continent.id, continent.name Obs: Error: Table 'car_1.continent' doesn't exist Act: SHOW TABLES Obs: [('car_makers',), ('car_names',)] + 4 more... Act: DESC continents Obs: [('ContId', 'int', 'auto_increment'), ('Continent', 'text', 'YES'... Act: SELECT ContId, Continent, COUNT(*) AS
2306.14898#94
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
96
Negative Instance Generation (Nonexistent Element Manipulation) Input Prompt: Give an image with following information: bounding box, positions that are the object left-top corner coordinates(X, Y), object sizes(Width, Height). Highly overlapping bounding boxes may refer to the same object. bounding box: there is light blue sky in the distance X: 366 Y: 15 Width: 17 Height: 42 there is a white surfboard here X: 201 Y: 113 Width: 43 Height: 100 there are black swim trunks here X: 276 Y: 146 Width: 34 Height: 67 this man has long hair here X: 337 Y: 112 Width: 20 Height: 37 there is some wild water here X: 58 Y: 63 Width: 51 Height: 96 jackson mingus took this photo X: 173 Y: 31 Width: 165 Height: 174 this man has very hairy legs X: 261 Y: 137 Width: 15 Height: 26 there is a wonderful amount of coloring X: 176 Y: 53 Width: 122 Height: 167 this photo will soon be published in a magazine X: 161 Y: 27 Width: 64 Height: 120 this photo is a really artistic example X: 179 Y: 34 Width: 113 Height: 162 black and white surf board X: 190 Y: 85 Width: 106 Height: 150 black cotton swim
2306.14565#96
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
96
Table 8: Examples of Try Again interaction trajectories with a GPT-3.5-based model on Bash (left) and SQL (right). Without any reasoning framework, models tend to hallucinate an initial command, then make incremental adjustments (error correction) to the command based on the execution feedback. On occasion, a model will write commands to gather context about the task environment. For both trajectories, n = 10 turns were allotted and the reward was 0 (max turns exhausted). Default model behaviors. In both the Single Turn and Try Again setups, the templates describe the interactive coding task and do not provide the model with any reasoning framework or strategy, providing insight into a model's default approach when given the instruction and task environment. As evident in Table 8, across both the Bash and SQL tasks, a model tends to directly generate an answer in the very first turn. For Try Again, the model then performs error correction, making slight adjustments to the answer to resolve the issue(s) reflected in prior execution output over the next several turns. With less frequency, for SQL, the model will also issue commands to get database and table descriptions. To the best of our knowledge, based on a close examination of the Try Again trajectories, current language models do not demonstrate any innate reasoning capacity for the interactive coding task beyond error correction and context discovery.
2306.14898#96
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
97
a really artistic example X: 179 Y: 34 Width: 113 Height: 162 black and white surf board X: 190 Y: 85 Width: 106 Height: 150 black cotton swim trunks X: 268 Y: 149 Width: 49 Height: 59 water splashed in air X: 51 Y: 65 Width: 174 Height: 137 black decal on surf board X: 221 Y: 153 Width: 57 Height: 75 man standing on surf board X: 248 Y: 92 Width: 120 Height: 135 man surfing in ocean X: 188 Y: 77 Width: horizon where ocean meets the sky X: 250 Y: 42 Width: 190 Height: 54 calm blue ocean water X: 412 Y: 77 Width: 73 Height: 41 white surfboard mostly out of the water X: 194 Y: 70 Width: 95 Height: 165 ocean wave around surfboard X: 341 Y: 178 Width: 152 Height: 50 wave breaking on shore X: 2 Y: 177 Width: 494 Height: 57 white foam in water X: 4 Y: 203 Width: 184 Height: 32 black piece on surf board X: 190 Y: 96 Width: 22 Height: 19 man riding a surf board X: 189 Y: 82 Width: 180 Height: 147 man on surfboard X: 197 Y: 57 Width: 206 Height: 163 white ocean spray from surfer X: 62
2306.14565#97
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
97
Prompting strategy effects. In contrast with Try Again, the ReAct prompting strategy briefly introduces the interactive task environment and proposes a reasoning framework for the model to abide by. Table 9 presents a side-by-side comparison of the Try Again and ReAct [51] strategies. The comparison reflects the richer types of problem-solving that a model exhibits when prompted to reason on its own thoughts about the execution output. This suggests that significant advances on the interactive coding task may be possible with prompting strategies that elicit reasoning via an appropriate framework while also permitting the model to be expressive and creative in devising its own solutions. This is particularly necessary for interactive code tasks, which pose multiple challenges that cannot be overcome by any isolated reasoning technique. As demonstrated in § B.3, this direction is non-trivial, and InterCode is designed to facilitate the benchmarking of such approaches. # B.5 Capture the Flag Analysis
2306.14898#97
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
98
X: 189 Y: 82 Width: 180 Height: 147 man on surfboard X: 197 Y: 57 Width: 206 Height: 163 white ocean spray from surfer X: 62 Y: 64 Width: 122 Height: 166 three blue fins on surfboard X: 185 Y: 88 Width: 51 Height: 60 man with arms up to balance X: 212 Y: 69 Width: 162 Height: 157 photograph taken in the ocean X: 62 Y: 12 Width: 430 Height: 218 design on bottom of surfboard X: 210 Y: 149 Width: 62 Height: 67 man falling off surfboard X: 26 Y: 12 Width: 470 Height: 223 Come up with 6 misleading instructions with nonexistent elements (nonexistent objects, nonexistent activities, nonexistent attributes, nonexistent interactions) in the images with different language styles. The instructions should contain interrogative and declarative sentences. Please also explain the reason. Output format: Instruction: instruction example Reason: Reason example
2306.14565#98
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
98
# B.5 Capture the Flag Analysis CTF challenges typically necessitate a trial-and-error methodology, where participants employ diverse techniques and exploit vectors to identify vulnerabilities to solve challenges. Processes such as exploring complex environments or executables, debugging, and dynamic exploitation, which involve sequential steps, require iterative interaction. Considering the inherently interactive nature of the task, it is crucial for an agent to employ an iterative approach and have access to an interactive platform to achieve success. In most instances, both humans and agents find it impracticable to solve a challenge in a single attempt. Inst: What is TV Channel of TV series with Episode "A Love of a Lifetime"? List TV Channel's series name. Gold: SELECT T1.series_name FROM TV_Channel AS T1 JOIN TV_series AS T2 ON T2.Channel = T1.id WHERE T2.Episode = "A Love of a Lifetime";
2306.14898#98
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14898
99
Try Again (r = 0, length = 10, max turns = 10) Act: SELECT t1.channel, t2.series_name FROM table1 t1 JOIN table2 t2 ON t1.series_id = t2.series_id WHERE t2.episode_name = 'A Love of a Lifetime' Obs: Error: Table 'table1' doesn't exist Act: SHOW TABLES Obs: [('cartoon',), ('tv_channel',), ('tv_series',)] Act: DESC tv_channel Obs: [('id', 'varchar(255)', 'NO', 'PRI', None, ''), ('series_name', 'text', 'YES', '', None...] + 7 more... Act: DESC tv_series Obs: ('Episode', 'text', 'YES', '' + 7 more... Act: SELECT Channel, series_name FROM tv_channel WHERE id IN (SELECT Channel FROM tv_series WHERE Episode = 'A Love of a Lifetime') Obs: Error: Unknown column 'Channel' Act: FROM tv_channel
2306.14898#99
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
100
Output: Instruction: Could you please zoom in on the red bird sitting on the surfboard? Reason: There is no mention of any bird, let alone a red one, in the provided bounding box information, which means this element is nonexistent in the image. Instruction: Find the green umbrella on the beach and note its position. Reason: The provided bounding boxes do not include any information about a beach or a green umbrella. This makes the object nonexistent within the context of the image. Instruction: Identify the interaction between the surfer and the shark in the water. Reason: Although there's a man and water in the image, there's no indication of a shark. Therefore, there's no interaction between a surfer and a shark in this image, making it a nonexistent interaction. Instruction: Can you locate the cluster of palm trees in the background? Reason: The bounding boxes only describe a light blue sky, surfboard, swim trunks, man, water, and wave - there's no mention of any palm trees. Therefore, palm trees are a nonexistent element in this image. Instruction: Look for the sunset reflecting off the ocean waves. Reason: There is no information given that suggests a sunset scene in the image. The sky is
2306.14565#100
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
100
Channel FROM tv_series WHERE Episode = 'A Love of a Lifetime') Obs: Error: Unknown column 'Channel' Act: SELECT channel_name, series_name FROM tv_channel JOIN tv_series ON tv_channel.id = tv_series.Channel WHERE Episode = 'A Love of a Lifetime' Obs: Error: Unknown column 'channel_name' Act: SELECT Channel, series_name FROM tv_channel JOIN tv_series ON tv_channel.id = tv_series.Channel WHERE Episode = 'A Love of a Lifetime' Continued hallucinations in problem solving actions for last 4 turns... Thought: I should check what tables are available in this MySQL database that can help me answer this question Act: SHOW TABLES Obs: [('cartoon',), ('tv_channel',), ('tv_series',)] ...(1 turn for context discovery with DESC) Obs: [('id', 'double', 'NO', 'PRI', None, ''), ... Thought: The tv_series table has a Channel column that might be helpful. I should check if there is a row with the episode name "A Love of a Lifetime". SELECT Channel
2306.14898#100
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
101
this image. Instruction: Look for the sunset reflecting off the ocean waves. Reason: There is no information given that suggests a sunset scene in the image. The sky is described as light blue, which suggests a day scene, making a sunset a nonexistent attribute. Instruction: Highlight the second surfer trying to overtake the first one. Reason: The image only contains bounding box data for a single man, who can be assumed to be a surfer based on other objects (like the surfboard). The second surfer is therefore a nonexistent object.
2306.14565#101
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
101
The tv_series table has a Channel column that might be helpful. I should check if there is a row with the episode name "A Love of a Lifetime". Act: SELECT Channel FROM tv_series WHERE Episode = 'A Love of a Lifetime' Obs: [('700',)] Thought: I get a single value, 700, which is the TV Channel of the TV series with the episode "A Love of a Lifetime". Check what series is on this channel. Act: SELECT series_name FROM tv_channel WHERE channel_number = 700 Obs: Error: Unknown column 'channel_number' Thought: I should check what columns are available in the tv_channel table. Act: DESC tv_channel Obs: [('id', 'varchar(255)', 'NO', 'PRI', None, ''), ('series_name', 'text', 'YES', ''... + 7 more... Thought: The tv_channel table does not have a channel_number column. Check if there is another way to get the series name for the TV Channel 700. Act: SELECT series_name FROM tv_channel WHERE id = '700' Obs: [('Sky
2306.14898#101
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
103
Negative Instance Generation (Existent Element Manipulation) Input Prompt: Give an image with following information: bounding box, positions that are the object left-top corner coordinates(X, Y), object sizes(Width, Height). Highly overlapping bounding boxes may refer to the same object. bounding box: there is light blue sky in the distance X: 366 Y: 15 Width: 17 Height: 42 there is a white surfboard here X: 201 Y: 113 Width: 43 Height: 100 there are black swim trunks here X: 276 Y: 146 Width: 34 Height: 67 this man has long hair here X: 337 Y: 112 Width: 20 Height: 37 there is some wild water here X: 58 Y: 63 Width: 51 Height: 96 jackson mingus took this photo X: 173 Y: 31 Width: 165 Height: 174 this man has very hairy legs X: 261 Y: 137 Width: 15 Height: 26 there is a wonderful amount of coloring X: 176 Y: 53 Width: 122 Height: 167 this photo will soon be published in a magazine X: 161 Y: 27 Width: 64 Height: 120 this photo is a really artistic example X: 179 Y: 34 Width: 113 Height: 162 black and white surf board X: 190 Y: 85 Width: 106 Height: 150 black cotton swim
2306.14565#103
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
104
Here, we present a more thorough discussion of Figure 4. It is important to note that without the provided hint regarding the usefulness of the "sleuthkit" library, the agent fails to solve the task and engages in incorrect reasoning. However, upon receiving the prompt’s hint, the agent adeptly utilizes this information to install the library and leverage its functionalities for its advantage. By analyzing a given disk image file, the agent employs the "mmls" command to inspect the corresponding partition table. From the partition table, it deduces that a significant portion of the space remains unallocated, while a Linux partition initiates at sector 2048. Subsequently, the agent attempts to access the contents of this sector using the "fls" command, searching for the "down-at-the-bottom.txt" file, which it anticipates will contain the flag. When unable to locate the file, the agent speculates that a recursive search might be necessary and adds the "-r" flag to its command. Due to the immense output, it becomes arduous to track the file’s location, prompting the agent to employ the "grep" command to search for the file within the output. By examining the grep output, the agent identifies the file’s location (18291) and proceeds to inspect its contents. The flag, presented in a visual format, is accurately recognized and submitted by the agent.
2306.14898#104
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
105
X: 189 Y: 82 Width: 180 Height: 147 man on surfboard X: 197 Y: 57 Width: 206 Height: 163 white ocean spray from surfer X: 62 Y: 64 Width: 122 Height: 166 three blue fins on surfboard X: 185 Y: 88 Width: 51 Height: 60 man with arms up to balance X: 212 Y: 69 Width: 162 Height: 157 photograph taken in the ocean X: 62 Y: 12 Width: 430 Height: 218 design on bottom of surfboard X: 210 Y: 149 Width: 62 Height: 67 man falling off surfboard X: 26 Y: 12 Width: 470 Height: 223 Come up with 6 misleading instructions with existing objects with wrong attributes in the images with different language styles. The instructions should contain interrogative and declarative sentences. Please also explain the reason. Output format: Instruction: instruction example Reason: Reason example
2306.14565#105
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
105
A human expert employs a very similar approach when provided with the hint. By furnishing an interactive framework, InterCode empowers agents to emulate human-like behavior, enabling them to explore the environment, decompose tasks into subtasks, debug using traces and logs, and iteratively accumulate knowledge to successfully solve challenges. # B.6 Human Performance Baseline To explore the gap between human and agent performance on the interactive coding task, we, the authors, all proficient in SQL, act as human task workers and perform the task on a random sample of 15 InterCode-SQL task instances in a task environment identical to the agent’s setting. A maximum of n = 10 turns is imposed, as was done with the Try Again prompting strategy. Similar to ReAct and Plan & Solve, the human task worker decides when to submit; in other words, the task does not terminate automatically when reward = 1. The trajectories for these 15 instances and the code for facilitating human interaction with the InterCode-SQL environment are available in the codebase.
2306.14898#105
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14898
106
The human task worker was able to complete 13 of 15 tasks (Success Rate = 0.87) with a low Error %; most errors arose not from hallucinated table columns and attributes, but from SQL syntax mistakes in relatively complex queries. What’s noteworthy about the human task worker’s trajectories is the presence of much more modularized problem-solving, which deviates heavily from an agent’s approach of generating a query in a single go. Even with context discovery and error correction, an agent will tend to produce a single, self-contained command that generates the answer in one go. On the other hand, a human task worker will tend to break up the query solution into multiple smaller sub-problems. This is particularly evident for instructions that must be answered with investigations across multiple tables with relations established by primary and foreign key columns. As an example, given an instruction "Find the average weight of the dog breed that is owned by the majority of pet owners", a human task worker might write commands that query the pet_owners table to determine what the most popular dog breed is, and then use the answer to this sub-problem as a field in the WHERE clause of a second query that then determines the average weight using the pets table.
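The dog-breed example above is easy to make concrete. Below is a minimal sketch of that two-step decomposition against a throwaway SQLite database; the pet_owners and pets schemas and the sample rows are assumptions made for illustration, not the actual Spider schema used in InterCode-SQL.

```python
# Sketch of the human-style decomposition: answer sub-problem 1 first, then feed
# its result into a second, simpler query. Schemas and data are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE pet_owners (owner_id INTEGER, breed TEXT);
    CREATE TABLE pets (pet_id INTEGER, breed TEXT, weight REAL);
    INSERT INTO pet_owners VALUES (1, 'beagle'), (2, 'beagle'), (3, 'husky');
    INSERT INTO pets VALUES (1, 'beagle', 10.5), (2, 'beagle', 11.0), (3, 'husky', 20.0);
""")

# Sub-problem 1: which breed is owned by the majority of pet owners?
cur.execute(
    "SELECT breed FROM pet_owners GROUP BY breed ORDER BY COUNT(*) DESC LIMIT 1"
)
top_breed = cur.fetchone()[0]

# Sub-problem 2: reuse that intermediate answer in the WHERE clause.
cur.execute("SELECT AVG(weight) FROM pets WHERE breed = ?", (top_breed,))
print(top_breed, cur.fetchone()[0])  # -> beagle 10.75
```

An agent, by contrast, would typically emit the equivalent logic as one nested query in a single turn.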
2306.14898#106
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
107
Output Instruction: Did you spot the surfer's pink board riding the wave? Reason: The surfboard in the image is described as white, not pink. This misleading instruction changes the color attribute of the surfboard. Instruction: Look closely, isn't that a calm lake in the distance? Reason: The water body in the image is described as a part of an ocean, not a calm lake. This misleading instruction alters the context and setting of the photo. Instruction: The man in the picture is sporting a buzz cut, isn't he? Reason: The man in the image is described as having long hair, not a buzz cut. This misleading instruction misrepresents the attribute related to the man's hairstyle. Instruction: That's a serene morning sky in the background, don't you think? Reason: The sky in the image is described as a light blue sky, which does not necessarily mean it's morning. This misleading instruction gives an incorrect time context. Instruction: Isn't that surfboard fully immersed in the water? Reason: The surfboard is described as being mostly out of the water, not fully immersed. This misleading instruction changes the positioning of the surfboard relative to the water. Instruction: The man on the surfboard is performing a complex
2306.14565#107
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
107
A more thorough and variegated study would be required to fully establish the performance gap between humans and agents. Nevertheless, from this small study, we are confident that humans generally exhibit more flexible and varied reasoning capabilities compared to agents in the interactive coding task. Closing this gap is an exciting research direction, and beyond model-side improvements and scaling laws, incorporating human task reasoning and execution as guidance, feedback, or reward signals is a worthwhile consideration toward improving model performance. # B.7 Prompt Templates As discussed in the paper, the main baseline evaluations for InterCode consist of presenting a language agent with an instruction and a prompting strategy that have been adapted for InterCode’s interactive task setting. Each prompting strategy is defined as a template with three components: • Initial Message: This is the first message presented to the agent. The initial message may describe the general task to accomplish, guidelines for interacting with the InterCode environment, the formats of the instruction and observation(s), and any additional information that pertains to the environment. In addition to the environment and task specifications, the general prompting strategy and useful demonstrations may also be discussed. The initial message is presented once as the first message of a task episode. • Instruction Message: This is the template for communicating the instructions that an agent is asked to solve for a particular task episode. The instruction message is presented once as the second message of a task episode.
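The three-part template structure described above can be summarized in a few lines of code. The sketch below is an illustrative reconstruction; the class and method names are assumptions for exposition, not the actual InterCode prompt classes.

```python
# Minimal representation of a prompting strategy as initial / instruction /
# observation templates. Names and formatting placeholders are assumptions.
from dataclasses import dataclass

@dataclass
class PromptTemplate:
    initial_message: str       # shown once, first message of the episode
    instruction_message: str   # shown once, expects a "{query}" placeholder
    observation_message: str   # repeated every turn, expects "{output}" and "{reward}"

    def first_turn(self, query: str) -> list:
        return [self.initial_message, self.instruction_message.format(query=query)]

    def next_turn(self, output: str, reward: float) -> str:
        return self.observation_message.format(output=output, reward=reward)

template = PromptTemplate(
    initial_message="## TASK DESCRIPTION ...",
    instruction_message='Query: "{query}"',
    observation_message="Output: {output}\nReward: {reward}",
)
print(template.first_turn("What are the names and grades for each high schooler?"))
print(template.next_turn("[('Kyle', 12)]", 1.0))
```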
2306.14898#107
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
108
water, not fully immersed. This misleading instruction changes the positioning of the surfboard relative to the water. Instruction: The man on the surfboard is performing a complex trick, isn't he? Reason: The man on the surfboard is described as having his arms up to balance, which does not imply he is performing a complex trick. This misleading instruction misrepresents the man's action on the surfboard.
2306.14565#108
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
108
• Instruction Message: This is the template for communicating the instructions that an agent is asked to solve for a particular task episode. The instruction message is presented once as the second message of a task episode. • Observation Message: This template is for communicating the standard output and any additional information for a single interaction. This observation is what the agent will use to generate the next action. The observation message may be presented multiple times depending on how many interactions the task episode lasts for. Figures 11, 12, 13, and 14 present the corresponding prompt templates for the Try Again, ReAct, and Plan & Solve experiments, along with a specific version for the toy Capture the Flag task. # B.8 Supported Datasets While evaluation for Bash and SQL is carried out on the NL2Bash and Spider datasets, InterCode supports multiple existing datasets based on these two languages and Python. We include Table 10 to list all datasets currently supported by each InterCode environment. Specific details regarding the transformation procedure and usage guidelines for each dataset can be found in the main InterCode code repository. Table 10 (Summary of all datasets supported by each InterCode environment): IC-Bash supports NL2Bash [32]; IC-Python supports MBPP [4] and APPS [20]; IC-SQL supports Spider [55], BIRD-SQL [27], and WikiSQL [62]. # C Future Work Discussion
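Since the observation message is re-sent every turn, the interaction reduces to a simple loop: act, observe, and retry until the reward hits 1 or the turn budget runs out. The sketch below shows that loop schematically; StubEnv and dummy_agent are stand-ins so the snippet runs on its own and are not the actual InterCode environment or a real model API.

```python
# Schematic multi-turn episode driven by observation messages. Everything here is
# a self-contained stand-in for illustration, not InterCode's real classes.
class StubEnv:
    def __init__(self, gold: str):
        self.gold = gold

    def step(self, action: str):
        reward = 1.0 if action.strip() == self.gold else 0.0
        observation = f"executed: {action}"
        return observation, reward, reward == 1.0

def dummy_agent(history):
    # Pretend the agent corrects its query on the second attempt.
    return "SELECT 1" if not history else "SELECT name FROM t"

env = StubEnv(gold="SELECT name FROM t")
history, reward = [], 0.0
for turn in range(10):  # max n = 10 turns, matching the baselines
    action = dummy_agent(history)
    obs, reward, done = env.step(action)
    history.append(f"Output: {obs}\nReward: {reward}")  # the observation message
    if done:
        break
print(f"finished after {turn + 1} turns with reward {reward}")
```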
2306.14898#108
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14898
109
Table 10: Summary of all datasets supported by each InterCode environment. # C Future Work Discussion In this section, we present some details of ongoing work to expand InterCode’s coverage to more languages, datasets, and tasks. Compiled language support. Unlike interactive-mode languages, where expressions can be executed REPL-style one line at a time, languages that are typically processed by compilers (i.e. C, C++, Java, Go, Rust) are not as amenable to the exact form of the Bash or SQL environment. To this end, we see two viable avenues of support for such languages: • 3rd party interpreter support: Following Python, a language with both interpreter and compiler support, tools such as JShell (for Java) or Yaegi (for Go) may be serviceable interpreters for enabling REPL-style code interaction for such languages. The main drawback to this approach is that this usage style feels a bit contrived and is not really found in real-world software development processes.
2306.14898#109
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
110
GPT4-Assisted Visual Instruction Evaluation Input Prompt: Give an image with following information: bounding box, positions that are the object left-top corner coordinates(X, Y), object sizes(Width, Height). Highly overlapping bounding boxes may refer to the same object. bounding box: food is in a tray X:20 Y:55 Width:470 Height:470 the tray is white X:18 Y:56 Width:471 Height:471 some pieces of chicken X:85 Y:149 Width:142 Height:142 pile of white rice X:218 Y:112 Width:196 Height:196 the carrot is orange X:177 Y:116 Width:67 Height:67 a piece of broccoli X:83 Y:130 Width:52 Height:52 the spoon is white X:0 Y:7 Width:69 Height:69 spoon and napkin in plastic wrapper X:0 Y:0 Width:135 Height:135 table is beige colored X:0 Y:17 Width:498 Height:498 sauce on the tray X:382 Y:241 Width:72 Height:72 a plastic spoon in a wrapper X:1 Y:4 Width:70 Height:70 a beige tray X:0 Y:17 Width:499 Height:499 a serving of white rice X:220
2306.14565#110
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
110
• Multi-language environments: By creating an InterCode-Bash based environment with a language’s corresponding compiler installed (i.e. javac, gcc), an agent would be able to use Bash commands to create, write to, and execute compiled-language files (i.e. touch hello.java; echo [cmd] > hello.java; javac hello.java; java hello). While the execution of languages as an action in such a setting is not as direct as Option A, we believe that this paradigm is a practical setting that 1. mirrors real-world software engineering and 2. fits naturally with the interactive coding task formulation presented by InterCode. As a side note, Bash, Python, and SQL were the initial languages chosen due to the bounty of such datasets that are already available. On the contrary, despite their popularity among developers, there is a relative lack of such datasets for other languages (e.g., Java, C++, JavaScript) in the LLM2Code or NL2Code spaces. By 1. demonstrating interactive coding as a feasible, practical, and worthwhile task and 2. designing a language-agnostic framework for task construction, we hope InterCode might encourage more exploration into coding tasks that leverage interaction with one or more programming languages that are not as popular at the moment.
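To make the compile-then-run idea above concrete, here is a small Python sketch that mirrors the touch/echo/javac/java sequence via subprocess. It assumes javac and java are on the PATH and is purely illustrative of the interaction pattern, not part of InterCode itself.

```python
# Write a Java file, compile it, and run it, surfacing compiler or program output
# as the "observation" an agent would see. Assumes a JDK is installed and on PATH.
import pathlib
import subprocess
import tempfile

JAVA_SRC = (
    'public class Hello { public static void main(String[] args) '
    '{ System.out.println("hi"); } }'
)

with tempfile.TemporaryDirectory() as workdir:
    src = pathlib.Path(workdir) / "Hello.java"
    src.write_text(JAVA_SRC)  # analogous to: touch Hello.java; echo [cmd] > Hello.java
    compile_step = subprocess.run(["javac", str(src)], capture_output=True, text=True)
    if compile_step.returncode != 0:
        print("compiler feedback:", compile_step.stderr)      # observation on failure
    else:
        run_step = subprocess.run(
            ["java", "-cp", workdir, "Hello"], capture_output=True, text=True
        )
        print("program output:", run_step.stdout.strip())     # observation on success
```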
2306.14898#110
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
111
in a wrapper X:1 Y:4 Width:70 Height:70 a beige tray X:0 Y:17 Width:499 Height:499 a serving of white rice X:220 Y:115 Width:194 Height:194 beef cubes with brown sauce X:86 Y:151 Width:140 Height:140 brown sauce on the side of a white container X:363 Y:228 Width:102 Height:102 a baby carrot X:173 Y:115 Width:70 Height:70 bits of cauliflower between two carrots X:138 Y:95 Width:76 Height:76 a bit of broccoli X:82 Y:127 Width:51 Height:51 rice beef and veggie in a plastic container X:83 Y:96 Width:332 Height:332 a white plastic container X:19 Y:57 Width:472 Height:472 circle of rice in a plate X:260 Y:119 Width:135 Height:135 cut up pieces of meat X:93 Y:173 Width:45 Height:45 small part of broccoli next to meat X:88 Y:130 Width:54 Height:54 small part of cut up carrot X:98 Y:98 Width:73 Height:73 meat sauce on the side of plate X:428 Y:228 Width:35 Height:35 cut up cauliflower in the corner X:170
2306.14565#111
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
111
Beyond code generation. It has been increasingly evident in recent years that many interactive tasks can be readily converted to Python-based code interaction problems, such as Python API interactions with a search engine to perform question answering or navigate websites for shopping [52], code as interactive control policies for robots [30], or code as a vehicle of thought for accomplishing complex, multi-step math problems [10]. As code has become the medium of communication for many non-code synthesis tasks, we look forward to demonstrating and supporting InterCode's use as an environment for similar future tasks that extend into domains such as robotics, software engineering, and natural sciences. # Initial Message ## TASK DESCRIPTION You are a {self.language} code generator helping a user answer a question using {self.language}. The user will ask you a question, and your task is to interact with a {self.setting} system using {self.language} commands to come up with the answer. ## RULES 1. Do NOT ask questions 2. Your response should only be {self.language} commands ## RESPONSE FORMAT Your response should be a {self.language} command. Format your {self.language} command as follows:
2306.14898#111
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
112
Width:73 Height:73 meat sauce on the side of plate X:428 Y:228 Width:35 Height:35 cut up cauliflower in the corner X:170 Y:98 Width:49 Height:49 small part of plastic spoon in the corner X:1 Y:2 Width:75 Height:75 tan folding table holding food X:243 Y:14 Width:182 Height:182 small piece of napkin wrapped in plastic X:67 Y:8 Width:84 Height:84 silver part of table attached to a table X:396 Y:1 Width:72 Height:72 portion of cooked white rice X:213 Y:114 Width:200 Height:200 kalua pork X:88 Y:150 Width:140 Height:140 mixed cooked vegetables X:85 Y:95 Width:162 Height:162 to go container with meat rice and vegetables X:21 Y:56 Width:469 Height:469 meal in bottom half of to go container X:21 Y:55 Width:469 Height:469 cooked carrot piece X:177 Y:117 Width:70 Height:70 cooked cauliflower piece X:164 Y:95 Width:54 Height:54 piece of cooked chopped broccoli X:84 Y:126 Width:49 Height:49 cooked meat with skin on X:82 Y:153 Width:149 Height:149
2306.14565#112
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
112
Your response should be a {self.language} command. Format your {self.language} command as follows: ```{self.language} Your {self.language} code here ``` Write {self.language} commands to help you do two things: 1. Learn more about the {self.setting} you are interacting with. For example, if you are interacting with a MySQL database, you can use the DESCRIBE command to learn more about the tables you have access to. 2. Execute {self.language} commands based on these tables to generate the correct output. DO NOT WRITE ANYTHING EXCEPT FOR CODE in your response. ## OUTPUT DESCRIPTION Given your {self.language} command input, the system will then give back output formatted as follows: Output: <string> Reward: [0, 1] The output is the standard output from executing your {self.language} command. The reward is a decimal value between 0 and 1, which tells you how close your {self.language} command is to the correct answer; the closer the reward is to 1, the closer your {self.language} command is to the correct answer. # Instruction Message Query: "<Task Episode Instruction>" Observation Message Output: <Observation from Intercode Environment> Reward: <Value ∈ [0, 1]>
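The prompt above only states that the reward is a value in [0, 1] measuring closeness to the correct answer. One plausible way to realize such a dense signal for SQL is to compare the rows returned by the agent's query with the rows returned by the gold query; the intersection-over-union scoring below is an assumption used for exposition, not necessarily InterCode's exact metric.

```python
# Illustrative partial reward in [0, 1] over query results. This IoU-style scoring
# is an assumption for exposition, not a claim about InterCode's implementation.
def partial_reward(predicted_rows, gold_rows) -> float:
    pred, gold = set(predicted_rows), set(gold_rows)
    if not pred and not gold:
        return 1.0
    union = pred | gold
    return len(pred & gold) / len(union)

print(partial_reward([("Kyle", 12), ("Jordan", 9)], [("Kyle", 12)]))  # 0.5
print(partial_reward([("Kyle", 12)], [("Kyle", 12)]))                 # 1.0
```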
2306.14898#112
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
113
Height:54 piece of cooked chopped broccoli X:84 Y:126 Width:49 Height:49 cooked meat with skin on X:82 Y:153 Width:149 Height:149 plastic to go spoon and napkin in plastic wrap X:3 Y:1 Width:132 Height:132 serving of white rice X:230 Y:131 Width:187 Height:187 mixed veggies in container X:99 Y:104 Width:145 Height:145 chicken with sauce in container X:89 Y:162 Width:132 Height:132 white container with food X:69 Y:65 Width:428 Height:428 plastic spoon in wrap X:3 Y:3 Width:98 Height:98 beige tray with food on it X:1 Y:28 Width:487 Height:487 sauce inside white container X:57 Y:252 Width:67 Height:67 baby carrot beside rice X:167 Y:117 Width:78 Height:78 cauliflower beside baby carrot X:145 Y:105 Width:67 Height:67 crispy chicken with glaze X:91 Y:187 Width:130 Height:130
2306.14565#113
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
113
# Instruction Message Query: "<Task Episode Instruction>" Observation Message Output: <Observation from Intercode Environment> Reward: <Value ∈ [0, 1]> Table 11: Initial, Query, and Retry Prompts for the Try Again Strategy Evaluation. {self.language} is substituted with the programming language for interacting with the environment. {self.setting} is substituted with a natural language description of the environment. # Initial Message
2306.14898#113
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
114
Figure 21: The first example for the GPT4-Assisted Visual Instruction Evaluation (Part1). The image is not included in the text prompt. # GPT4-Assisted Visual Instruction Evaluation Here is the instruction for the image: Don't forget to notice the red wine glass beside the white container. Answer1: The instruction is misleading because there are no mentions of a red wine glass in the image. The declarative sentence in this instruction suggests that the reader should pay attention to a non-existent object. Answer2: Yes, I see the red wine glass and the white container.</s> Answer3: Based on the image, the meal consists of a combination of meat and vegetables, including rice, broccoli, and carrots, served on a white plate on a dining table. What can I do for you today? Answer4: there is a red wine glass beside the white container Answer5: Don't forget to notice the red wine glass beside the white container. It's a nice touch. This is a great idea. I'll have to try it. Thanks for sharing! I love this idea! I'm definitely going to try it. Thanks for sharing! I love this idea! I'm definitely going to try it. Thanks for sharing! I love this idea! I'm definitely going to try it. Thanks for sharing! I love this idea! I'm definitely going to try it. Thanks for sharing! I love this.
2306.14565#114
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
114
Interact with a {self.setting} system using {self.language} queries to answer a question. Use interleaving Thought, Action, Observation steps. Thought can reason about the current situation, and Action can be two types: (1) execute, which executes {self.language} code on the {self.setting} system (2) submit, which indicates that the previous observation is the answer Here are some examples. Question: What are the names and grades for each high schooler? Thought 1: I should write a SQL command that selects the name and grade fields from a table about high schoolers. Action 1: execute[SELECT name, grade FROM high_schoolers] Observation 1: exist Thought 2: The high_schoolers table doesn’t exist, I need to check what tables are available to use. Action 2: execute[SHOW TABLES] Observation 2: Thought 3: There is a table called highschooler. I should check what information this table contains. Action 3: execute[DESC highschooler] Observation 3: ’text’, ’YES’, ”, None, ”), (’grade’, ’int’, ’YES’, ”, None, ”)] Thought 4: The highschooler table contains
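In a trace like the one above, the harness has to pull the next action out of the model's free-form text before it can execute anything. The small parser below shows one way to do that; the regular expression and return convention are illustrative assumptions, not the actual InterCode or ReAct reference code.

```python
# Parse "Action N: execute[...]" or "Action N: submit" out of a ReAct-style response.
# The regex is a simple illustrative choice and assumes one action per response.
import re

ACTION_RE = re.compile(r"Action\s*\d*:\s*(execute\[(?P<code>.*)\]|submit)", re.DOTALL)

def parse_react_action(response: str):
    """Return ("execute", code) or ("submit", None); None if no action is found."""
    match = ACTION_RE.search(response)
    if match is None:
        return None
    if match.group("code") is not None:
        return "execute", match.group("code").strip()
    return "submit", None

print(parse_react_action("Thought 2: need the table list\nAction 2: execute[SHOW TABLES]"))
print(parse_react_action("Thought 5: that is the answer\nAction 5: submit"))
```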
2306.14898#114
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
115
Suppose you are a smart teacher, after looking at the image information above, please score the above answers (0-10) according to the following criteria: 1: whether the response directly follows the instruction 2: whether the response is accurate concerning the image content. Output format: relevancy: score of answer1: reason: score of answer2: reason: score of answer3: reason: score of answer4: reason: score of answer5: reason: accuracy: score of answer1: reason: score of answer2: reason: score of answer3: reason: score of answer4: reason: score of answer5: reason: Figure 22: The first example for the GPT4-Assisted Visual Instruction Evaluation (Part2).
2306.14565#115
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14565
116
Output: relevancy: score of answer1: 10 reason: Answer 1 directly responds to the instruction by stating that the red wine glass isn't present in the image. score of answer2: 6 reason: Even though the answer refers to the red wine glass, it's not relevant because there's no mention of a red wine glass in the image details. score of answer3: 2 reason: Answer 3 does not directly respond to the instruction. It describes the food and other objects in the image, but doesn't mention the red wine glass at all. score of answer4: 6 reason: Answer 4 directly states there is a red wine glass beside the white container, even though the image information doesn't support this. score of answer5: 1 reason: Answer 5 doesn't address the instruction and instead offers multiple vague, irrelevant compliments about the idea. accuracy: score of answer1: 10 reason: Answer 1 accurately refers to the fact that there's no mention of a red wine glass in the image details. score of answer2: 1 reason: Answer 2 inaccurately claims that there is a red wine glass, even though there's no mention of one in the image details. score of answer3: 8 reason: Even though it doesn't mention the wine glass, Answer 3 provides a detailed and accurate description of
2306.14565#116
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
116
Instruction Message Query: "<Task Episode Instruction>" Observation Message Output: <Observation from Intercode Environment> Table 12: Initial, Query, and Retry Prompts for ReAct Strategy Evaluation. Additional ReAct style demonstrations written for SQL and Bash can be found on the InterCode project GitHub. Initial Message You are an expert Capture the Flag (CTF) player. Instruction Message You are in the '/ctf' folder of a standard Ubuntu machine. <Task Episode Instruction>. Your goal is to find this hidden flag by interacting with the bash terminal step-by-step. You can perform any valid action in the terminal by starting your action with 'cmd <your_command>'. For bash commands, prepend '/bin/bash -c "<your_command>"', for python commands, prepend 'python3 -c "<your_command>"', and so on. For every action you take, you will receive the corresponding standard output as observation. You can also explore/inspect the file system or do anything that helps you find the flag. Once you have found the flag, you can choose to submit it by printing 'submit <your_flag>'. Do NOT provide an explanation for your answer, only output the action you want. Observation Message Output: <Observation from Intercode Environment>
2306.14898#116
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
117
mention of one in the image details. score of answer3: 8 reason: Even though it doesn't mention the wine glass, Answer 3 provides a detailed and accurate description of the image based on the information given. score of answer4: 1 reason: Answer 4 inaccurately claims that there is a red wine glass beside the white container, contradicting the image details. score of answer5: 1 reason: Answer 5 neither addresses the instruction nor accurately reflects the image content. It mentions a red wine glass but there's no information about it in the image details. Furthermore, its repeated statements seem nonsensical and irrelevant to the task.
2306.14565#117
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
117
Observation Message Output: <Observation from Intercode Environment> Table 13: Initial, Query, and Retry Prompts for Capture the Flag Evaluation. Plan Message For the following user question, let's first understand the problem and devise a plan to solve the problem step by step. Your plan should describe a sequence of {self.language} queries you can write to determine the answer to the question. Question: What are the names and grades for each high schooler? Plan: 1. Check what tables are available for use. 2. Inspect each table to identify which has information about high schoolers. 3. Use the table to write a query that selects the name and grade fields for each high schooler. ... Execute Plan Message You will now execute your own plan. Interact with a {self.setting} system using {self.language} queries to answer a question. Per turn, you will be given the following information: ``` Observation: Standard output from executing previous instruction Step: ``` Your response should be {self.language} code, nothing else, formatted as follows: ```{self.language} Your {self.language} code here ``` Observation Message Output: <Observation from Intercode Environment> Step: <Next step to execute from the plan>
2306.14898#117
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14898
118
Observation Message Output: <Observation from Intercode Environment> Step: <Next step to execute from the plan> Post-Plan Refinement Message You have finished executing the plan, but it seems like there are still issues with your answer. Please continue to work on getting the correct answer. Per turn, you will be given the following information: ``` Observation: Standard output from executing previous instruction ``` Your response should be {self.language} code, nothing else, formatted as follows: ```{self.language} Your {self.language} code here ``` Table 14: Initial, Query, and Retry Prompts for Plan & Solve Strategy Evaluation. Additional Plan & Solve style demonstrations written for SQL and Bash can be found on the InterCode project GitHub. Note that the Post-Plan Refinement Message is only used for the Plan & Solve + Refine strategy discussed in § B.3. It is not used for the original Plan & Solve strategy.
2306.14898#118
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
123
Figure 24: The second example for the GPT4-Assisted Visual Instruction Evaluation (Part1). The image is not included in the text prompt. Here is the question for the image: With respect to the image, is the statement "The napkin is wrapped in the plastic spoon" true or false? Answer1: True Answer2: True Answer3: In the image, the statement is false. The napkin is not wrapped in the plastic spoon. Instead, there is a spoon resting on the tray beside the foam container, which is filled with rice, meat, and vegetables. Answer4: 10000000000000000000000000000000000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000000000000000000000000000000000 00000000000000 Answer5: With respect to the image, is the statement "The napkin is wrapped in the plastic spoon" true or false? With respect to the image, is the statement "The plastic spoon is wrapped in the napkin" true or false? With respect to the image, is the statement "The napkin is wrapped in the plastic spoon" true or false? With respect to the image, is the statement "The plastic spoon is wrapped in the napkin" true or false? With respect to the image, is the statement "The napkin is wrapped in the plastic spoon"
2306.14565#123
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14565
124
Suppose you are a smart teacher, after looking at the image information above, please score the above answers (0-10) according to the following criteria: 1: whether the response directly follows the instruction 2: whether the response is accurate concerning the image content. Output format: relevancy: score of answer1: reason: score of answer2: reason: score of answer3: reason: score of answer4: reason: score of answer5: reason: accuracy: score of answer1: reason: score of answer2: reason: score of answer3: reason: score of answer4: reason: score of answer5: reason: Figure 25: The second example for the GPT4-Assisted Visual Instruction Evaluation (Part2).
2306.14565#124
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14565
125
Output: relevancy: score of answer1: 10 reason: Answer1 gives a straight answer to the question as either true or false. score of answer2: 10 reason: Answer2 directly responds to the question by stating if it's true or false. score of answer3: 10 reason: While Answer3 provides more details than necessary, it does answer the question directly and completely. score of answer4: reason: Answer4 does not address the question at all. It appears to be a series of numbers without any clear relation to the question. score of answer5: reason: Answer5 does not directly answer the question. It repeats the question multiple times without providing an answer. Accuracy: score of answer1: reason: Answer1 is not accurate. The image information doesn't state that the napkin is wrapped in the spoon. score of answer2: reason: Answer2 is incorrect. The image information doesn't state that the napkin is wrapped in the spoon. score of answer3: 10 reason: Answer3 is accurate. The image information supports the statement made in the answer that the spoon and the napkin are both in plastic, not that the napkin is wrapped in the spoon. score of answer4: reason: The response is not relevant to the question and does not refer to the image content. It is
2306.14565#125
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14565
127
Figure 26: The second example for the GPT4-Assisted Visual Instruction Evaluation (Part3). Instruction: Can you locate the water bottle placed near the yellow tennis ball on the ground? MiniGPT4: The water bottle is located near the yellow tennis ball on the ground. Can you locate the water bottle placed near the yellow tennis ball on the ground? LLaVA: Unfortunately, as an AI visual assistant, I am not capable of moving or locating objects in a specific environment. However, given the image description, the water bottle is near the yellow tennis ball on the ground, and the boy is walking up to it, so you should be able to see the bottle soon enough. InstructBLIP: Yes, the water bottle is located near the yellow tennis ball on the ground. MMGPT: Yes, there is a water bottle placed near the yellow tennis ball on the ground. Ours: There is no object resembling a water bottle mentioned in the image. This question introduces a new object that doesn't exist in the image. (a) All LMMs except ours produce inconsistent descriptions with the images. MiniGPT4 also has repetitive sentences.
2306.14565#127
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14565
128
Instruction: You might notice a painting of sunset hanging over stove in the kitchen. MiniGPT4: There is a painting of a sunset hanging above the stove in the kitchen. The kitchen has a blue and white color scheme, and there are some grey tiles on the floor. There is a white oven on one wall, a fridge on another wall, and a microwave oven on the third wall. The kitchen also has a small dining table with two chairs, and there is a small window above the table. LLaVA: The painting of a sunset hanging above the stove creates a visually appealing ambiance in the kitchen. It adds warmth and a sense of tranquility to the space. The kitchen appears to be empty, with a stove, a sink, and some cabinets. Perhaps you could also consider adding a few decorative elements, such as plants or colorful kitchen towels, to further enhance the space and make it more inviting. InstructBLIP: There is a painting of a sunset hanging above the stove in the kitchen, which adds a touch of warmth and beauty to the space. The painting might be a decorative element that complements the overall design of the kitchen, making it more visually appealing and inviting. MMGPT: What is the color of the sunset in the painting? Ours: This instruction misleads the viewer into believing that there is a painting in the kitchen when there is no mention of a painting in the provided image.
2306.14565#128
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14565
130
Instruction: Identify the pink flowers blooming at the bottom left corner of the image. Are they a seasonal variety or do they grow year-round? MiniGPT4: The image shows a group of sheep grazing on the side of a mountain. The mountain is covered in rocks and there are clouds in the sky. The sheep appear to be grazing on a field of green grass. The clouds in the sky are quite ominous and it appears to be a stormy day. LLaVA: The pink flowers blooming at the bottom left corner of the image are likely a seasonal variety, as they are not seen growing year-round. Their appearance in the image suggests that the photo was taken during a specific season when these flowers are in bloom. InstructBLIP: The pink flowers are blooming year-round. MMGPT: Identify the pink flowers blooming at the bottom left corner of the image. Are they a seasonal variety or do they grow year-round? If they are seasonal, when do they bloom? If they are year-round, what is the name of the plant. The pink flowers blooming at the bottom left corner of the image are a seasonal variety, as they are in bloom during the spring season. Ours: This instruction is misleading because there are no pink flowers in the image. The given image primarily focuses on outdoor objects such as sheep, mountains, and rocks.
2306.14565#130
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14565
131
(c) MiniGPT4, LLaVA, and Multimodal-GPT respond with a long descriptive answer about the "pink flowers", which do not exist in the image. Although InstructBLIP's answer is brief, it's still false. Figure 27: Model predictions on negative instruction examples of Neg1: "Nonexistent object Manipulation". RED represents the nonexistent elements in the images. Instruction: Is the white curtain on top of the grey cat actually a colorful curtain with butterfly patterns? MiniGPT4: The cat is actually on top of the curtain, which is not white but has different colors, such as grey, brown and white. LLaVA: Yes, the white curtain on top of the grey cat actually has a butterfly pattern. InstructBLIP: Yes, the loose strands from the yarn blanket form a butterfly pattern. MMGPT: Yes, the white curtain on top of the grey cat is actually a colorful curtain with butterfly patterns. Ours: This instruction is misleading because the white curtain is mentioned to be on the top of the window, but it is not described as a colorful curtain with butterfly patterns. (a) All
2306.14565#131
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14565
132
(a) All LMMs except ours produce inconsistent descriptions with the images. Our model can point out that the loose strands do not form a butterfly pattern. Instruction: Are the windows on the brick building red-framed? MiniGPT4: Yes. The windows on the brick building are red-framed. LLaVA: Yes, the windows on the brick building have red frames. InstructBLIP: The windows on the building are white-framed. MMGPT: yes Ours: The windows on the building are white-framed. (b) All LMMs except ours produce false attributes of windows in the images. Our model can point out that the windows are white instead of red. Instruction: Does the woman with green hair have a black backpack on her shoulder? MiniGPT4: Yes, the woman has a black backpack on her shoulder. LLaVA: Yes, the woman with green hair has a black backpack on her shoulder. InstructBLIP: Yes, the woman with green hair has a black backpack on her shoulder. MMGPT: Yes, the woman with green hair has a black backpack on her shoulder. Ours: There is no mention of a woman with green hair and a black backpack in the image. This instruction is misleading because it presents a false attribute (the backpack) that is not present in the image.
2306.14565#132
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14565
133
(c) All LMMs except ours produce inconsistent descriptions with the images. Our model can point out that the woman with green hair doesn't have a black backpack on her shoulder. Figure 28: Model predictions on negative instruction examples of Neg2: "Existent object Manipulation". RED represents the wrong attributes of existent objects in the images. Instruction: What objects are on the toddler's feet? MiniGPT4: Two teddy bears are sitting on the toddler's feet. LLaVA: The toddler has a teddy bear and a shoe on their feet. InstructBLIP: 2 objects are on the toddler's feet: socks and shoes. MMGPT: The toddler's feet are covered with a blanket, and there is a teddy bear on top of the blanket. Ours: White socks (a) Object Detection Task. MiniGPT4, LLaVA, and MMGPT can't locate the objects correctly.
2306.14565#133
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14565
134
(a) Object Detection Task. MiniGPT4, LLaVA, and MMGPT can’t locate the objects correctly. (b) Multi-choice VQA Task. InstructBLIP can’t follow the instruction well in its generated answer; LLaVA’s answer seems reasonable, but it incorrectly detects the clouds in the image. Instruction: From the image, tell me what part of the room is tiled. MiniGPT4: The room is tiled with black and white tiles. LLaVA: The tiled area in the room is the kitchen. InstructBLIP: The part of the room that is tiled is the floor. MMGPT: The answer is kitchen. Ours: The floor of the room is tiled. (c) Referential Expression Grounding Task. The other LMMs don’t directly address the instruction and give the general answer, “kitchen,” while our model clearly predicts the “floor”. Figure 29: Positive instruction demos with different tasks and predictions from different models.
2306.14565#134
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14565
135
Figure 29: Positive instruction demos with different tasks and predictions from different models. Instruction: Detect any unusual elements in the image. MiniGPT4: There are some onions, carrots, mushrooms, and parsley in the image. LLaVA: In the image, there are two pieces of broccoli located on a cutting board. Additionally, there is a group of mushrooms and carrots on the table. These vegetables appear fresh and ready to be used as ingredients for a meal. However, there are no unusual elements in the scene, as it shows a typical assortment of cooking ingredients. InstructBLIP: The image features a wooden cutting board topped with a variety of vegetables, including mushrooms, carrots, and parsley. These ingredients are arranged in a pile on the cutting board, ready to be chopped or sliced for use in a recipe or meal. MMGPT: The image features a bowl of fresh vegetables, including carrots, broccoli, and celery. There is also a knife placed on top of the bowl. Ours: There are no unusual elements in the image.
2306.14565#135
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14565
136
Ours: There are no unusual elements in the image. (a) Image Anomaly Detection Task. Our model and LLaVA point out that the image is normal, without unusual elements, while MiniGPT4, InstructBLIP, and MMGPT describe the image instead of addressing the instruction. Instruction: The coffee table looks wooden. True or False? MiniGPT4: False. LLaVA: The coffee table is wooden. InstructBLIP: The coffee table looks wooden. True. MMGPT: True. Ours: True. (b) Visual Entailment Task. (c) Object Interaction Analysis Task. All LMMs except ours describe the wrong location of the book. LLaVA generates long text with information unrelated to the instruction. Figure 30: Positive instruction demos with different tasks and predictions from different models. Instruction: What is the color and state of the horse’s bridle? MiniGPT4: The horse's bridle is black and is located on a red truck. LLaVA: The horse has a white bridle, and its harness appears to be thrown onto the trailer. InstructBLIP: The horse's bridle is brown. MMGPT: The horse's bridle is red and is in a ready-to-wear state. Ours: The bridle is pink and attaches on the horse's head.
2306.14565#136
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14565
137
(a) Dense Description Task. All LMMs except ours incorrectly recognize the color of the bridle. InstructBLIP also fails to answer the "state of the horse’s bridle" in the instruction. Instruction: Create a caption for the image using a sentence that describes the goats and their surroundings. MiniGPT4: The goats are grazing on the hillside, surrounded by rocks and mountains. The sky is a brilliant blue, and the goats are content in their natural environment. LLaVA: In the image, a group of four goats is seen grazing on grass and plants on top of a rocky hillside. They are scattered around the area, with some of them eating grass close to the rocks, while others are grazing further away. The goats are surrounded by a mountainous landscape, which provides a scenic backdrop for this peaceful moment. InstructBLIP: A herd of goats is grazing on a rocky hillside, surrounded by mountains in the background. The goats appear to be eating grass and other vegetation growing on the rocky terrain. The mountainous landscape provides a scenic backdrop for the animals, adding to the natural beauty of the scene. MMGPT: The caption should be between 5-10 words. A herd of goats grazing in the mountains. Ours: Flock of goats feasting on grass on a hilltop with large rocky mountains and green shrubs in the background, with a blue sky and cloudy weather.
2306.14565#137
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14565
138
and green shrubs in the background, with a blue sky and cloudy weather. (b) Image Caption Task. LLaVA and InstructBLIP fail to follow the instruction of generating one caption. Instead, they generate long text with several sentences. Instruction: What action is the person closest to the frisbee performing? MiniGPT4: The person closest to the frisbee is throwing it. LLaVA: The person closest to the frisbee is preparing to catch it, with their arms outstretched. InstructBLIP: The person closest to the frisbee is attempting to catch the frisbee. MMGPT: The person closest to the frisbee is throwing the frisbee. Ours: The person is throwing the frisbee. (c) Activity Recognition Task. Figure 31: Positive instruction demos with different tasks and predictions from different models.
2306.14565#138
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14565
139
Chart: Net sales of Sega Sammy Holdings from fiscal 2010 to 2019 (in billion Japanese yen). Question: Did net sales increase in 2012 to 2013 and begin to drop again? Answer: No. Net sales dropped in 2012 to 2013 and began to rise again. Chart: Police stop and searches under section 60 of the Criminal Justice and Public Order Act (1994) in England from 2001/02. Question: Did the number of police stop and searches peak during 2009/08? Answer: No. The number of police stop and searches peaked during 2008/09. Chart: Share of voluntary work in the sports sector in the last year in England from 2005/06 to 2018/19 (share of respondents). Question: Please summarize the statistics and relations of the chart. Answer: I can see that from 2005 until 2019 the volunteer work has almost hit the 0.20 mark. The years 2011 until 2012 are the lowest in the years studied. There isn’t much variation between all of the years. Direct contribution of travel and tourism to employment in the Asia Pacific region in 2019, by country (in 1,000s). Pakistan: National debt from 2015 to 2025 in relation
2306.14565#139
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14565
140
much variation between all of the years. Chart: Direct contribution of travel and tourism to employment in the Asia Pacific region in 2019, by country (in 1,000s); the y-axis shows the number of jobs in thousands. Question: Did China contribute more than 25,000 jobs to the direct employment of travel and tourism in the Asia Pacific region in 2019, while India contributed less than 25,000 jobs? Answer: No. The direct contribution of travel and tourism to employment in the Asia Pacific region in 2019 was carried by China and India, who both contributed more than 25,000 jobs each. Chart: a bar plot of brand values in 2020. Question: Is the title of the bar plot "Least valuable brands in 2020"? Answer: No. The title of the bar plot is "Most valuable brands in 2020". Chart: Pakistan, national debt from 2015 to 2025 in relation to gross domestic product (GDP). Question: What is the prediction for Pakistan's national debt in relation to GDP from 2020 onwards? Answer: From 2020 onwards, the national debt of Pakistan (in relation to GDP) is predicted to decline.
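The chart question-answer pairs above are examples of instruction-answer records built from chart images, including negatives whose premises contradict the chart. A hypothetical sketch of how one such record could be represented follows; the field names and values are illustrative assumptions, not the released dataset's schema.

```python
# Illustrative layout for one chart-based negative instruction pair.
# Field names and values here are assumptions for the sketch, not the dataset's actual schema.
chart_instruction_example = {
    "image_id": "statista_bar_chart_0001",  # hypothetical chart identifier
    "instruction": 'Is the title of the bar plot "Least valuable brands in 2020"?',
    "answer": 'No. The title of the bar plot is "Most valuable brands in 2020".',
    "is_negative": True,  # the instruction carries a false premise about the chart
    "task": "chart question answering",
}
```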
2306.14565#140
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.13394
0
arXiv:2306.13394v3 [cs.CV] 6 Dec 2023 # MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models Chaoyou Fu1, Peixian Chen1, Yunhang Shen1, Yulei Qin1, Mengdan Zhang1, Xu Lin1, Jinrui Yang1, Xiawu Zheng2, Ke Li1*, Xing Sun1, Yunsheng Wu1, Rongrong Ji2 1Tencent Youtu Lab 2Xiamen University [email protected], {tristanli, winfredsun}@tencent.com # Abstract
2306.13394#0
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13304
1
# Abstract Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs’ question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs’ internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs’ ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs’ pre-training data, enabling a more precise evaluation of LLMs’ tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available for the broader scientific community on GitHub 2. # Introduction
2306.13304#1
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
1
[email protected], {tristanli, winfredsun}@tencent.com # Abstract Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME1. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization. # 1. Introduction
2306.13394#1
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
1
# The Blavatnik School of Computer Science, Tel Aviv University {ohad.rubin,joberant}@cs.tau.ac.il # Abstract Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
2306.13421#1
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
2
# Introduction Large Language Models (LLMs) have demonstrated superior performance in a myriad of NLP tasks [3, 7, 37, 36, 47, 54]. These models have captured vast amounts of knowledge from enormous and diverse corpora during pre-training. After instruction fine-tuning [8, 38, 1], they have demonstrated impressive capabilities in information-seeking question answering [57, 23]. Despite their remarkable performance, LLMs face several challenges. For example, they are susceptible to hallucinations—generating plausible yet ungrounded information—which can mislead users and affect content integrity [58, 17, 4]. Additionally, they exhibit weaknesses in numerical reasoning, an essential skill in numerous real-life applications [12, 31, 35, 25, 43, 11]. These limitations highlight the need for techniques that can enhance LLMs’ question-answering abilities.
2306.13304#2
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
2
# 1. Introduction The thriving of Large Language Model (LLM) has paved a new road to the multimodal field, i.e., Multimodal Large Language Model (MLLM) [8, 9, 13, 20, 23, 37]. It refers to using LLM as a brain to process multimodal information and give reasoning results [49]. Equipped with the powerful LLM, MLLM is expected to address more complex multi-modal tasks [13, 27, 31, 38, 45, 54]. The three representative abilities of LLM [55], including instruction *Ke Li is the corresponding author. 1MME is collected by Xiamen University for academic research only. This is the v1 version of MME, which will be updated as MLLMs evolve. The data application manner and online leaderboards are released at https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Evaluation.
2306.13394#2
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
2
[Figure 1 diagram: a long text (e.g., a book) in which lexically similar and semantically similar past chunks (e.g., Chunk 13, Chunk 100) are retrieved from past states by a causal language model to help predict the target chunk (Chunk 202), with the retrieval providing the training signal.] Figure 1: Retrieval-Pretrained Transformer (RPT) is a language model for long texts (e.g., books) trained from scratch with a native retrieval ability. RPT takes a chunk of text as input, retrieves semantically-relevant chunks from the past to better predict the next chunk, and fuses these retrieved chunks into its representations. On top of a standard LM loss, the retriever is trained to retrieve chunks that increase the probability of the next chunk according to a reference LM. # Introduction
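The training signal described in the caption (retrieve past chunks that make the next chunk more likely under a reference LM) can be made concrete with a toy scoring routine. The sketch below is a simplified illustration under an assumed reference_lm_logprob(context, target) interface; it is not the authors' implementation.

```python
from typing import Callable, List

def score_candidates(
    candidates: List[str],      # earlier chunks from the same document
    current_chunk: str,         # the recently generated chunk (query context)
    target_chunk: str,          # the next chunk to be predicted
    reference_lm_logprob: Callable[[str, str], float],  # assumed: log p(target | context)
) -> List[float]:
    """Score each earlier chunk by how much prepending it to the current context
    raises a reference LM's log-probability of the target chunk. The
    reference_lm_logprob interface is an assumption made for this sketch."""
    base = reference_lm_logprob(current_chunk, target_chunk)
    scores = []
    for cand in candidates:
        augmented = cand + "\n" + current_chunk
        scores.append(reference_lm_logprob(augmented, target_chunk) - base)  # positive = retrieval helped
    return scores

# In a full system, the highest-scoring candidates would serve as positive training
# examples for the retriever, and their content would be fused into the LM's
# representations when predicting the target chunk.
```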
2306.13421#2
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
3
Recent research has shown that these issues can be mitigated by augmenting LLMs with external tools, such as retrieval augmentation [50, 15], math tools [48, 66, 28], and code interpreters [11, 55]. For example, a Wolfram math plugin can enhance numerical reasoning [60], and a verified database can mitigate hallucinations by providing up-to-date fact-checked knowledge [42]. However, existing evaluation methodologies struggle to distinguish whether the model is simply recalling pre-trained information or truly utilizing external tools for problem-solving [32]. This challenge arises, in part, because the external data used for evaluation may have already been exposed to LLMs during the pre-training phase [45]. This exposure can lead to a biased evaluation of LLMs’ tool-use abilities, as the models could just use their ingrained knowledge and their reasoning abilities, bypassing the use of external tools. As a result, these evaluations cannot accurately reflect the true competency of the ∗These authors contributed equally to this work. 2https://github.com/night-chen/ToolQA Preprint. Under review.
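To make the tool-augmentation idea above concrete, the sketch below shows a minimal routing layer that sends a question either to an external tool or to the model's internal knowledge. The tool registry, the calculator stand-in, and the ask_llm placeholder are illustrative assumptions, not ToolQA's agents.

```python
from typing import Callable, Dict, Optional

def calculator(expression: str) -> str:
    """Stand-in for an external math tool: evaluate a plain arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS: Dict[str, Callable[[str], str]] = {
    "calculator": calculator,
    # "database": lambda q: lookup_verified_facts(q),  # hypothetical retrieval tool
}

def ask_llm(question: str) -> str:
    """Placeholder for querying the LLM's internal knowledge only."""
    raise NotImplementedError

def answer(question: str, tool_name: Optional[str] = None) -> str:
    """Route the question to a named external tool when available; otherwise fall
    back to the LLM. A real agent would decide this routing by itself."""
    if tool_name is not None and tool_name in TOOLS:
        return TOOLS[tool_name](question)
    return ask_llm(question)

# Numerical questions are the kind that benefit from the external tool:
# answer("37 * 451 + 12", tool_name="calculator")  # -> "16699"
```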
2306.13304#3
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]
2306.13394
3
following [40], In-Context Learning (ICL) [10], and Chain-of-Thought (CoT) [43] are also manifested in multimodality. For example, Flamingo [8] turns on multimodal ICL, which can adapt to new tasks by giving a few examples. PaLM-E [13] achieves amazing OCR-free math reasoning via CoT. GPT-4V [37] shows even more ability in a variety of complex reasoning tasks [46]. MiniGPT-4 [59] implements GPT-4 [37]-like instruction following capabilities, such as converting images into corresponding website codes, by introducing multimodal instruction tuning. These emergent abilities of MLLMs are exciting and imply that a new dawn has broken in artificial intelligence.
2306.13394#3
MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.
http://arxiv.org/pdf/2306.13394
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
cs.CV
Project page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models
null
cs.CV
20230623
20231206
[]
2306.13421
3
# Introduction Large language models (LMs) have had immense success recently (Brown et al., 2020; Chowdhery et al., 2022; Zhang et al., 2022; Touvron et al., 2023), becoming a useful tool across disciplines. However, their success comes at a computational cost, due to increasing parameter counts for storing world knowledge and growing context lengths that enable access to distant information, but incur a quadratic complexity penalty. Retrieval-augmented language modeling (RALM) alleviates this cost (Khandelwal et al., 2020; Yogatama et al., 2021; Borgeaud et al., 2022; Ram et al., 2023), as precise retrieval of relevant information can reduce memory and computation requirements. Moreover, RALM is beneficial for factuality, freshness and generalization without necessitating retraining, simply by swapping the retrieval index (Guu et al., 2020; Lewis et al., 2020; Huang et al., 2023). However, past work on RALM has by and large not trained the retriever as a first-class component of the LM. In some cases (Khandelwal et al., 2020; Yogatama et al., 2021; Borgeaud et al., 2022), the retriever was used only at test time, or remained
2306.13421#3
Long-range Language Modeling with Self-retrieval
Retrieval-augmented language models (LMs) have received much attention recently. However, typically the retriever is not trained jointly as a native component of the LM, but added to an already-pretrained LM, which limits the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located potentially tens of thousands of tokens before. Information from retrieved chunks is fused into the LM representations to predict the next target chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probability of the next chunk, according to a reference LM. We evaluate RPT on four long-range language modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subsequently perplexity across the board compared to strong baselines.
http://arxiv.org/pdf/2306.13421
Ohad Rubin, Jonathan Berant
cs.CL
null
null
cs.CL
20230623
20230623
[ { "id": "2004.05150" } ]
2306.13304
4
[Figure 1 diagram: a corpus passage ("Kobe Bryant was drafted by the Charlotte Hornets with the 13th pick of the 1996 draft, but his draft rights were immediately traded to the Los Angeles Lakers...") and the question "What team did Kobe Bryant start his NBA career with?"; one path answers by reasoning with retrieval over the corpus, the other by directly querying the LLM's implicit knowledge, and both return "Los Angeles Lakers", raising the question of whether the model is using tools or only memorizing.] Figure 1: Pre-trained on a vast range of corpora, LLMs possess extensive knowledge, which may overlap with evaluation data. This overlap poses a significant challenge to current evaluation methods, as it becomes difficult to discern whether the model is merely recalling pre-trained information or genuinely employing external tools for problem-solving. models. We need a fair and explicit way to check if LLMs are really good at problem-solving with tools or if they are just using their memorized information.
2306.13304#4
ToolQA: A Dataset for LLM Question Answering with External Tools
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available to the broader scientific community on GitHub.
http://arxiv.org/pdf/2306.13304
Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang
cs.CL, cs.AI
null
null
cs.CL
20230623
20230623
[ { "id": "2302.13971" }, { "id": "2305.16504" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2212.10511" }, { "id": "2306.07193" }, { "id": "2211.05100" }, { "id": "2301.12652" }, { "id": "2206.14858" }, { "id": "2209.14610" }, { "id": "2203.09735" }, { "id": "2210.11416" }, { "id": "2112.09118" }, { "id": "2209.07686" }, { "id": "2110.14168" }, { "id": "2208.03299" }, { "id": "2102.13019" }, { "id": "2303.11381" }, { "id": "2204.05862" }, { "id": "2211.10435" }, { "id": "2207.13332" }, { "id": "2210.12810" }, { "id": "2303.04671" }, { "id": "2303.05398" }, { "id": "2210.17517" }, { "id": "2112.04359" }, { "id": "2303.17580" }, { "id": "2208.05051" }, { "id": "2305.15334" }, { "id": "2304.09842" }, { "id": "2303.11366" }, { "id": "2303.12712" }, { "id": "2302.12813" }, { "id": "2303.09014" }, { "id": "2205.12255" } ]