doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2306.14824 | 32 | In addition to multimodal grounding tasks, we evaluate the model's ability to understand image regions or objects users refer to via inputting bounding boxes. Compared with previous multimodal
Figure 5: The input format of referring expression generation evaluation under (1) zero-shot and (2) few-shot settings. The bounding boxes shown in the image are for visualization purposes.
LLMs that can only refer to image regions or objects via detailed text descriptions, directly referring to image regions by their bounding boxes is more effective and reduces ambiguity. | 2306.14824#32 | Kosmos-2: Grounding Multimodal Large Language Models to the World | We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new
capabilities of perceiving object descriptions (e.g., bounding boxes) and
grounding text to the visual world. Specifically, we represent refer
expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where
object descriptions are sequences of location tokens. Together with multimodal
corpora, we construct large-scale data of grounded image-text pairs (called
GrIT) to train the model. In addition to the existing capabilities of MLLMs
(e.g., perceiving general modalities, following instructions, and performing
in-context learning), Kosmos-2 integrates the grounding capability into
downstream applications. We evaluate Kosmos-2 on a wide range of tasks,
including (i) multimodal grounding, such as referring expression comprehension,
and phrase grounding, (ii) multimodal referring, such as referring expression
generation, (iii) perception-language tasks, and (iv) language understanding
and generation. This work lays out the foundation for the development of
Embodiment AI and sheds light on the big convergence of language, multimodal
perception, action, and world modeling, which is a key step toward artificial
general intelligence. Code and pretrained models are available at
https://aka.ms/kosmos-2. | http://arxiv.org/pdf/2306.14824 | Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei | cs.CL, cs.CV | 20 pages | null | cs.CL | 20230626 | 20230713 | [
{
"id": "2301.13688"
},
{
"id": "2210.08402"
},
{
"id": "2304.08485"
},
{
"id": "1905.00537"
}
] |
2306.14898 | 32 | Figure 3: Growth in Success Rate with increase in number of interaction turns across models configured with Try Again prompting strategy for InterCode-Bash and SQL tasks.
Try Again (n = 10): SQL SR 47.3, Turns 7.25, Error % 46.4; Bash SR 46.5, Turns 6.15, Error % 24.9. ReAct (n = 10): SQL SR 58.7, Turns 5.30, Error % 6.94; Bash SR 20.5, Turns 4.40, Error % 20.4. Plan & Solve: SQL SR 49.1, Turns 4.29, Error % 16.2; Bash SR 28.0, Turns 6.65, Error % 53.3.
Table 4: Comparison of different prompting strategies across the entire InterCode-SQL and InterCode-Bash datasets using gpt-3.5-turbo as the base model. Turns refers to the average number of turns taken for a single task episode. For Try Again and ReAct, the max number of turns is n = 10. The highest Success Rate, fewest Turns, and lowest Error % are highlighted per dataset, since they reflect more accurate and efficient task solving. Best metrics are in bold.
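To make these metrics concrete, here is a minimal sketch of how they could be computed from logged episodes; the record layout (`success`, `turns`, `errors`) is an illustrative assumption, not InterCode's actual API.

```python
# Illustrative metric computation; field names are assumptions, not InterCode's API.
def summarize(episodes):
    n = len(episodes)
    success_rate = 100.0 * sum(e["success"] for e in episodes) / n        # SR (%)
    avg_turns = sum(e["turns"] for e in episodes) / n                     # Turns
    total_actions = sum(len(e["errors"]) for e in episodes)
    error_pct = 100.0 * sum(sum(e["errors"]) for e in episodes) / total_actions  # Error %
    return success_rate, avg_turns, error_pct

# Two toy episodes with a max of 10 turns each (1 marks an erroneous action).
episodes = [
    {"success": True,  "turns": 4,  "errors": [0, 1, 0, 0]},
    {"success": False, "turns": 10, "errors": [1, 1, 0, 0, 0, 1, 0, 0, 0, 0]},
]
print(summarize(episodes))  # (50.0, 7.0, ~28.6)
```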
explicit reasoning frameworks such as ReAct and Plan & Solve policies generally achieve higher success rates (SQL: 47.3% → 58.7%) with fewer turns and a higher rate of admissible commands. | 2306.14898#32 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |
2306.14565 | 33 | # 6.2 Main Results
How do LMMs perform on public datasets? We compare our model against the baseline models on POPE in Tab. 3. The results show that current LMMs may not work well with open-ended negative instructions. In contrast, the highest scores of our model demonstrate that LRV-Instruction exhibits robustness to visual hallucination, matching or surpassing the performance of 13B counterparts. From Tab. 2, we find that both LMMs finetuned on LRV-Instruction outperform the original ones in the zero-shot evaluations. Additionally, Finetuned-mPLUG-Owl exceeds Finetuned-MiniGPT4 because mPLUG-Owl can use LoRA training to improve its language ability. We also calculate the accuracy on positive and negative samples of MME in the right chart of Tab. 2. The improvement on the positive samples is because LRV-Instruction has more diverse tasks than the mPLUG-Owl and MiniGPT4 datasets. The improvement on the negative samples demonstrates the value of the LRV-Instruction dataset in equipping the model with the ability to say "no" and provide correct answers. The complete results
| 2306.14565#33 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
LLMs that can only refer to image regions or objects via detailed text descriptions, directly referring to image regions by their bounding boxes is more effective and reduces ambiguity.
We evaluate the model on the referring expression generation task, which aims to generate unambiguous text descriptions of specific objects or regions within the bounding box. We employ the widely used RefCOCOg dataset [MHT+15] to evaluate the model's performance under both zero-shot and few-shot settings, showcasing its adaptability in different scenarios.
# 4.2.1 Evaluation Setup
The model is tasked with generating an associated text description for an object or region given the location tokens of its bounding box (e.g., "<box><loc1><loc2></box>"). Benefiting from the unified input format, we use "<p> It </p><box><loc1><loc2></box> is" as the prompt to encourage the model to predict the text description. Figure 5 (1) and (2) demonstrate the input format for zero-shot and few-shot referring expression generation, respectively. Following previous works, we report results using METEOR and CIDEr metrics. The image resolution is 224×224. Greedy search is used for decoding.
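As a rough illustration of the prompt construction described above, the sketch below builds the zero-shot prompt from a pixel bounding box. The 32×32 location-token grid and the row-major token indexing are assumptions for illustration; the exact quantization used by KOSMOS-2 may differ.

```python
# Sketch of building the zero-shot referring-expression-generation prompt.
# The 32x32 grid and row-major indexing are illustrative assumptions.
def box_to_loc_tokens(box, image_size=224, bins=32):
    """Map an (x0, y0, x1, y1) pixel box to two location tokens:
    the top-left and bottom-right grid cells."""
    x0, y0, x1, y1 = box
    cell = image_size / bins
    top_left = int(y0 // cell) * bins + int(x0 // cell)
    bottom_right = int((y1 - 1) // cell) * bins + int((x1 - 1) // cell)
    return f"<loc{top_left}>", f"<loc{bottom_right}>"

def reg_prompt(box):
    loc1, loc2 = box_to_loc_tokens(box)
    return f"<p> It </p><box>{loc1}{loc2}</box> is"

print(reg_prompt((10, 20, 120, 200)))
# -> "<p> It </p><box><loc65><loc913></box> is" under this toy quantization
```

In the few-shot setting of Figure 5 (2), full demonstrations of this prompt followed by a ground-truth description would simply be prepended before the query prompt.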
# 4.2.2 Results | 2306.14824#33 | Kosmos-2: Grounding Multimodal Large Language Models to the World | We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new
capabilities of perceiving object descriptions (e.g., bounding boxes) and
grounding text to the visual world. Specifically, we represent refer
expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where
object descriptions are sequences of location tokens. Together with multimodal
corpora, we construct large-scale data of grounded image-text pairs (called
GrIT) to train the model. In addition to the existing capabilities of MLLMs
(e.g., perceiving general modalities, following instructions, and performing
in-context learning), Kosmos-2 integrates the grounding capability into
downstream applications. We evaluate Kosmos-2 on a wide range of tasks,
including (i) multimodal grounding, such as referring expression comprehension,
and phrase grounding, (ii) multimodal referring, such as referring expression
generation, (iii) perception-language tasks, and (iv) language understanding
and generation. This work lays out the foundation for the development of
Embodiment AI and sheds light on the big convergence of language, multimodal
perception, action, and world modeling, which is a key step toward artificial
general intelligence. Code and pretrained models are available at
https://aka.ms/kosmos-2. | http://arxiv.org/pdf/2306.14824 | Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei | cs.CL, cs.CV | 20 pages | null | cs.CL | 20230626 | 20230713 | [
{
"id": "2301.13688"
},
{
"id": "2210.08402"
},
{
"id": "2304.08485"
},
{
"id": "1905.00537"
}
] |
Different tasks present different learning challenges. An important skill for solving the InterCode-SQL task is the ability to discover context and construct actions conditionally based on information revealed in prior observations. Given that InterCode-SQL task instructions are phrased most commonly as questions, adapting to the task setting and new information discovered along the way puts more emphasis on error correction and context discovery. On the other hand, the more declarative and multi-step nature of the InterCode-Bash task instructions is more aptly solved by planning and modular task completion. These distinctions manifest in the Plan & Solve strategy's performance gap between the InterCode-SQL and InterCode-Bash tasks; while Plan & Solve encourages a model to decompose problems into more manageable steps, the strategy is less favorable towards adjusting on the fly in response to execution feedback. Example trajectories supporting these claims are in § B.4. | 2306.14898#33 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |
2306.14565 | 34 | 7
Metric (scale): Ours / MiniGPT4 / LLaVA / InstructBLIP / MMGPT / mPLUG-Owl. GAVIE-ACCURACY (0-10): 6.58 / 4.14 / 4.36 / 5.93 / 0.91 / 4.84. GAVIE-RELEVANCY (0-10): 8.46 / 5.81 / 6.11 / 7.34 / 1.79 / 6.35. Human Expert1 (1-4): 3.48 / 2.61 / 2.87 / 3.00 / 1.90 / 2.90. Human Expert2 (1-4): 3.58 / 2.23 / 2.07 / 2.48 / 1.05 / 2.27. Human Expert3 (1-4): 3.33 / 2.58 / 2.89 / 2.94 / 1.38 / 2.91.
Table 4: Comparison results on our evaluation set evaluated by GAVIE. Ours means Finetuned mPLUG-Owl-7B. All the LMMs are 7B versions to make a fair comparison.
Accuracy by model: InstructBLIP-13B 0.62, LLaVA-13B 0.47, MiniGPT4-13B 0.42, mPLUG-Owl-7B 0.41, Ours-7B 0.64, Ours-7B-Psu 0.60. | 2306.14565#34 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14824 | 34 | # 4.2.2 Results
Table 4 presents the zero-shot and few-shot results of referring expression generation on RefCOCOg. We compare KOSMOS-2 with a finetuned listener-speaker model, which introduces an added reward-based module (SLR). Our model obtains impressive zero-shot performance on referring expression generation, and even outperforms finetuned SLR by 1.1 CIDEr points. Moreover, when prompted with few-shot demonstrations, KOSMOS-2 shows further improvements, highlighting its in-context learning ability.
RefCOCOg (Model / Setting / Meteor / CIDEr): SLR [YTBB17], Finetuning, 15.4, 59.2. SLR+Rerank [YTBB17], Finetuning, 15.9, 66.2. KOSMOS-2, Zero-shot, 12.2, 60.3. KOSMOS-2, Few-shot (k = 2), 13.8, 62.2. KOSMOS-2, Few-shot (k = 4), 14.1, 62.3.
Table 4: Results of referring expression generation on RefCOCOg.
# 4.3 Perception-Language Tasks
In addition to multimodal grounding and referring tasks, we also evaluate KOSMOS-2 on the vision-language tasks following KOSMOS-1. In particular, we perform zero-shot evaluations on two popular
| 2306.14824#34 | Kosmos-2: Grounding Multimodal Large Language Models to the World | We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new
capabilities of perceiving object descriptions (e.g., bounding boxes) and
grounding text to the visual world. Specifically, we represent refer
expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where
object descriptions are sequences of location tokens. Together with multimodal
corpora, we construct large-scale data of grounded image-text pairs (called
GrIT) to train the model. In addition to the existing capabilities of MLLMs
(e.g., perceiving general modalities, following instructions, and performing
in-context learning), Kosmos-2 integrates the grounding capability into
downstream applications. We evaluate Kosmos-2 on a wide range of tasks,
including (i) multimodal grounding, such as referring expression comprehension,
and phrase grounding, (ii) multimodal referring, such as referring expression
generation, (iii) perception-language tasks, and (iv) language understanding
and generation. This work lays out the foundation for the development of
Embodiment AI and sheds light on the big convergence of language, multimodal
perception, action, and world modeling, which is a key step toward artificial
general intelligence. Code and pretrained models are available at
https://aka.ms/kosmos-2. | http://arxiv.org/pdf/2306.14824 | Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei | cs.CL, cs.CV | 20 pages | null | cs.CL | 20230626 | 20230713 | [
{
"id": "2301.13688"
},
{
"id": "2210.08402"
},
{
"id": "2304.08485"
},
{
"id": "1905.00537"
}
] |
More adaptive reasoning is favorable. Compared to "imperative" reasoning paradigms such as Plan & Solve which prescribe a relatively rigid procedure, more flexible frameworks like ReAct, which do not enforce any particular logical formula or roadmap, are more conducive to eliciting a broader set of reasoning capabilities. However, while ReAct's performance is generally superior to Plan & Solve, tasks solved by both strategies with gpt-3.5-turbo make up 57% (407/708) and 27.6% (21/76) of the union of all successfully solved InterCode-SQL and InterCode-Bash tasks respectively. This discrepancy highlights a trade-off between the guidance and structural constraints that are inherent to prompting strategies; schemes that draw out specific reasoning patterns often overlook other equally useful capabilities. InterCode's interactive coding task can serve as a strong litmus test toward more adaptable, variegated model reasoning.
# 5.3 New tasks & datasets opportunities
InterCode's task formulation, modular design, flexible task construction, and use of virtual containers enable task designers to manifest new, complex, code-driven tasks, where completion is much more
| 2306.14898#34 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |
2306.14565 | 35 | Table 5: Zero-shot evaluation on GQA. Ours-7B means Finetuned mPLUG-Owl-7B. Ours-7B-Psu means we finetune mPLUG-Owl on pseudo instruction data by [41].
are shown in Tab. 11 and 12. We further explore the LMMs' performance in the common scenario of visual question answering (VQA). As shown in Tab. 5, the results suggest that our method (Finetuned mPLUG-Owl) achieves on-par performance with InstructBLIP in a generic VQA setting.
How do LMMs perform on LRV-Instruction? We show the evaluation results on our dataset in Tab. 4. Among the baselines, InstructBLIP achieves better results than other LMM baselines because its visual instructions are collected from a wide variety of publicly available datasets. LLaVA [26] utilizes the GPT-assisted approach to generate visual instructions, but its performance is much worse. This is probably because its synthetic answers from GPT4 are generally longer and may involve irrelevant information. As a comparison, our model outperforms the existing LMM baselines by a large margin, benefiting from the rich composition of our dataset and better prompt design.
# 6.3 Detailed Analysis | 2306.14565#35 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14824 | 35 | 9
tasks, including image captioning and visual question answering. Image captioning requires the model to generate a text description of the given image, whereas visual question answering seeks to answer a natural language question based on an image. In order to have a fair comparison with KOSMOS-1, we report results without instruction tuning.
# 4.3.1 Evaluation Setup
For image captioning, we evaluate the model on the widely used Flickr30k Karpathy split test set. We employ beam search for caption generation, with a beam size of 5. We report results using CIDEr [VLZP15] metrics evaluated by COCOEvalCap3. We use the prompt "An image of" to generate the image description.
For visual question answering, we evaluate zero-shot performance on the test-dev set of VQAv2. Greedy search is used for decoding. We report VQA scores obtained from the VQAv2 evaluation server4. "Question: {question} Answer: {answer}" is used as the prompt for the dataset. The image resolution is 224×224 for both tasks.
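A minimal sketch of the two zero-shot prompt templates and decoding settings described above; the `generate` call and its arguments are illustrative assumptions, not the actual KOSMOS-2 API.

```python
# Illustrative only: generate() and its arguments are assumed, not KOSMOS-2's API.
CAPTION_PROMPT = "An image of"
VQA_PROMPT = "Question: {question} Answer:"

def caption(model, image):
    # Flickr30k captioning: beam search with beam size 5.
    return model.generate(image, CAPTION_PROMPT, num_beams=5)

def vqa(model, image, question):
    # VQAv2: greedy decoding (equivalent to beam size 1 here).
    return model.generate(image, VQA_PROMPT.format(question=question), num_beams=1)
```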
# 4.3.2 Results | 2306.14824#35 | Kosmos-2: Grounding Multimodal Large Language Models to the World | We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new
capabilities of perceiving object descriptions (e.g., bounding boxes) and
grounding text to the visual world. Specifically, we represent refer
expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where
object descriptions are sequences of location tokens. Together with multimodal
corpora, we construct large-scale data of grounded image-text pairs (called
GrIT) to train the model. In addition to the existing capabilities of MLLMs
(e.g., perceiving general modalities, following instructions, and performing
in-context learning), Kosmos-2 integrates the grounding capability into
downstream applications. We evaluate Kosmos-2 on a wide range of tasks,
including (i) multimodal grounding, such as referring expression comprehension,
and phrase grounding, (ii) multimodal referring, such as referring expression
generation, (iii) perception-language tasks, and (iv) language understanding
and generation. This work lays out the foundation for the development of
Embodiment AI and sheds light on the big convergence of language, multimodal
perception, action, and world modeling, which is a key step toward artificial
general intelligence. Code and pretrained models are available at
https://aka.ms/kosmos-2. | http://arxiv.org/pdf/2306.14824 | Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei | cs.CL, cs.CV | 20 pages | null | cs.CL | 20230626 | 20230713 | [
{
"id": "2301.13688"
},
{
"id": "2210.08402"
},
{
"id": "2304.08485"
},
{
"id": "1905.00537"
}
] |
[Figure 4 screenshot, transcription garbled: an agent-Bash interaction on the picoCTF forensics task using the 'dds2-alpine-flag.img.gz' disk image, in which the flag file 'down-at-the-bottom.txt' is recovered with SleuthKit tools (gunzip, fls, icat) and the flag picoCTF{for3nsicater_novic3_f5565e7b} is submitted.] | 2306.14898#35 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |
2306.14565 | 36 | # 6.3 Detailed Analysis
Does GPT4-Assisted Visual Instruction Evaluation align with Human Evaluation? We select three human experts specializing in the field of NLP to evaluate the predictions from LMMs with four options for the scores (1) Very Poor, (2) Poor, (3) Good, (4) Excellent. To evaluate the results quantitatively, we assign different scores for the options: Very Poor=1, Poor=2, Good=3, Excellent=4. More implementation details are shown in the appendix. From Tab. 4, all experts agree that the output from our model is the best, followed by InstructBLIP in second place, and MMGPT performs the worst. The observation aligns with the GAVIE evaluation results. | 2306.14565#36 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14824 | 36 | # 4.3.2 Results
We present the zero-shot performance on Flickr30k and VQAv2 in Table 5. KOSMOS-2 exhibits overall performance comparable to KOSMOS-1, showing a slight improvement on Flickr30k while experiencing a marginal decrease on VQA. While KOSMOS-2 introduces new capabilities of grounding and referring, the model still achieves competitive performance on perception-language tasks.
Model (Flickr30k CIDEr / VQAv2 VQA acc.): FewVLM [JCS+22]: 31.0 / -. METALM [HSD+22]: 43.4 / 41.1. Flamingo-3B [ADL+22]: 60.6 / 49.2. Flamingo-9B [ADL+22]: 61.5 / 51.8. KOSMOS-1: 65.2 / 46.7. KOSMOS-2: 66.7 / 45.6.
Table 5: Zero-shot image captioning results on Flickr30k test set and zero-shot visual question answering results on VQAv2 test-dev set. We report results of KOSMOS-2 and KOSMOS-1 without instruction tuning.
# 4.4 Language Tasks | 2306.14824#36 | Kosmos-2: Grounding Multimodal Large Language Models to the World | We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new
capabilities of perceiving object descriptions (e.g., bounding boxes) and
grounding text to the visual world. Specifically, we represent refer
expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where
object descriptions are sequences of location tokens. Together with multimodal
corpora, we construct large-scale data of grounded image-text pairs (called
GrIT) to train the model. In addition to the existing capabilities of MLLMs
(e.g., perceiving general modalities, following instructions, and performing
in-context learning), Kosmos-2 integrates the grounding capability into
downstream applications. We evaluate Kosmos-2 on a wide range of tasks,
including (i) multimodal grounding, such as referring expression comprehension,
and phrase grounding, (ii) multimodal referring, such as referring expression
generation, (iii) perception-language tasks, and (iv) language understanding
and generation. This work lays out the foundation for the development of
Embodiment AI and sheds light on the big convergence of language, multimodal
perception, action, and world modeling, which is a key step toward artificial
general intelligence. Code and pretrained models are available at
https://aka.ms/kosmos-2. | http://arxiv.org/pdf/2306.14824 | Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei | cs.CL, cs.CV | 20 pages | null | cs.CL | 20230626 | 20230713 | [
{
"id": "2301.13688"
},
{
"id": "2210.08402"
},
{
"id": "2304.08485"
},
{
"id": "1905.00537"
}
] |
2306.14565 | 37 | Is GPT4-Assisted Evaluation Stable? We execute GAVIE 5 times on each instruction and evaluate the predictions from different LMMs. We leverage Standard Deviation (STD) to measure the stability of GAVIE. From Tab. 7 (left), we observe that STD ranges from 0.65 to 2.46. The ACCURACY and RELEVANCY scores of an instance from GPT4 may vary between different times, but they always belong to the same grade level. According to completed results from Tab. 9, RELEVANCY has four grade levels: (1) The response is completely relevant (9-10), (2) The response is mostly relevant (6-8), (3) The response is partly relevant (3-5), (4) The response is seldom relevant (0-2). ACCURACY has four grade levels: (1) The response is completely accurate (9-10), (2) The response has minor errors (6-8), (3) The response is partly accurate (3-5), (4) The response is mostly or completely wrong (0-2). | 2306.14565#37 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14824 | 37 | # 4.4 Language Tasks
We evaluate KOSMOS-2 on eight language tasks, such as cloze and completion tasks (StoryCloze, HellaSwag), Winograd-style tasks (Winograd, Winogrande), commonsense reasoning (PIQA), and three SuperGLUE benchmark [WPN+19] datasets (BoolQ, CB, and COPA). We report the zero- shot results in Table 6. Compared with KOSMOS-1, KOSMOS-2 achieves similar performance on StoryCloze, HellaSwag, Winograd, Winogrande, and PIQA, experiences a decrease in performance on CB, but shows improvement on BoolQ and COPA. In summary, KOSMOS-2 demonstrates the acquisition of new capabilities while experiencing comparable performance on language tasks. This illustrates the potential of the model in balancing and expanding its skills across different domains.
# 5 Conclusion
We present KOSMOS-2, a multimodal large language modal, that can ground to the visual world. Speciï¬cally, we pre-train KOSMOS-2 by augmenting the multimodal corpora used in KOSMOS-1 with GRIT, a large-scale dataset of Grounded Image-Text pairs, which is created by extracting | 2306.14824#37 | Kosmos-2: Grounding Multimodal Large Language Models to the World | We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new
capabilities of perceiving object descriptions (e.g., bounding boxes) and
grounding text to the visual world. Specifically, we represent refer
expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where
object descriptions are sequences of location tokens. Together with multimodal
corpora, we construct large-scale data of grounded image-text pairs (called
GrIT) to train the model. In addition to the existing capabilities of MLLMs
(e.g., perceiving general modalities, following instructions, and performing
in-context learning), Kosmos-2 integrates the grounding capability into
downstream applications. We evaluate Kosmos-2 on a wide range of tasks,
including (i) multimodal grounding, such as referring expression comprehension,
and phrase grounding, (ii) multimodal referring, such as referring expression
generation, (iii) perception-language tasks, and (iv) language understanding
and generation. This work lays out the foundation for the development of
Embodiment AI and sheds light on the big convergence of language, multimodal
perception, action, and world modeling, which is a key step toward artificial
general intelligence. Code and pretrained models are available at
https://aka.ms/kosmos-2. | http://arxiv.org/pdf/2306.14824 | Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei | cs.CL, cs.CV | 20 pages | null | cs.CL | 20230626 | 20230713 | [
{
"id": "2301.13688"
},
{
"id": "2210.08402"
},
{
"id": "2304.08485"
},
{
"id": "1905.00537"
}
] |
Figure 4: GPT-4's interaction trajectory for a binary exploitation CTF task. This requires proficiency in Bash and Python, among additional knowledge and reasoning. Orange text and arrows highlight the feedback that the model attends to in generating the next action. In the last step, the agent submits the flag.
attainable through interaction. We draw inspiration from Capture the Flag (CTF) [15], a competitive cybersecurity game that requires expertise in coding, cryptography (i.e. binary exploitation, forensics), reverse engineering, and recognizing security vulnerabilities to accomplish the primary objective of discovering encrypted "flags" concealed within code snippets or file systems. Compared to InterCode-Bash & -SQL, CTF is much more complicated, requiring an agent to exercise knowledge of multiple coding languages, modularize a higher-order objective into sub-problems, construct multi-step plans towards solving each problem, and adjust strategy when a plan fails to yield any useful insights.
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |
2306.14565 | 38 | How do LMMs perform at the different semantic levels of hallucination? As shown in Tab 6, all baselines perform better on Neg1 (Nonexistent Object Manipulation) than Neg2 (Existent Object Manipulation) and Neg3 (Knowledge Manipulation). From the visual perspective, existent object manipulations with wrong attributes in Neg2 are more challenging than adding nonexistent objects from images to instructions in Neg1. For example, in Fig. 2, it may be straightforward to find that the "hot air balloon" does not appear in the image. However, "woman" does exist in the second example of Fig. 2 while she is not in the blue pants and pink shirts, which requires a fine-grained understanding of the visual content. Therefore, a more powerful vision encoder is needed for future LMMs. Knowledge manipulation is challenging because current LMMs are finetuned on general images without specific knowledge. In contrast, our model greatly improves at all semantic levels, which benefits from our diverse instruction tuning data.
| 2306.14565#38 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14824 | 38 | Footnote 3: https://github.com/salaniz/pycocoevalcap Footnote 4: https://eval.ai/challenge/830/overview
Model / StoryCloze / HellaSwag / Winograd / Winogrande / PIQA / BoolQ / CB / COPA: LLM: 72.9 / 50.4 / 71.6 / 56.7 / 73.2 / 56.4 / 39.3 / 68.0. KOSMOS-1: 72.1 / 50.0 / 69.8 / 54.8 / 72.9 / 56.4 / 44.6 / 63.0. KOSMOS-2: 72.0 / 49.4 / 69.1 / 55.6 / 72.9 / 62.0 / 30.4 / 67.0.
Table 6: Zero-shot performance comparisons of language tasks between KOSMOS-2, KOSMOS-1 and LLM. LLM uses the same text data and training setup to reimplement a language model as KOSMOS-1. We report results of KOSMOS-2 and KOSMOS-1 without instruction tuning. Results of KOSMOS-1 and the LLM baseline are from [HDW+23]. | 2306.14824#38 | Kosmos-2: Grounding Multimodal Large Language Models to the World | We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new
capabilities of perceiving object descriptions (e.g., bounding boxes) and
grounding text to the visual world. Specifically, we represent refer
expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where
object descriptions are sequences of location tokens. Together with multimodal
corpora, we construct large-scale data of grounded image-text pairs (called
GrIT) to train the model. In addition to the existing capabilities of MLLMs
(e.g., perceiving general modalities, following instructions, and performing
in-context learning), Kosmos-2 integrates the grounding capability into
downstream applications. We evaluate Kosmos-2 on a wide range of tasks,
including (i) multimodal grounding, such as referring expression comprehension,
and phrase grounding, (ii) multimodal referring, such as referring expression
generation, (iii) perception-language tasks, and (iv) language understanding
and generation. This work lays out the foundation for the development of
Embodiment AI and sheds light on the big convergence of language, multimodal
perception, action, and world modeling, which is a key step toward artificial
general intelligence. Code and pretrained models are available at
https://aka.ms/kosmos-2. | http://arxiv.org/pdf/2306.14824 | Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei | cs.CL, cs.CV | 20 pages | null | cs.CL | 20230626 | 20230713 | [
{
"id": "2301.13688"
},
{
"id": "2210.08402"
},
{
"id": "2304.08485"
},
{
"id": "1905.00537"
}
] |
2306.14898 | 38 | We establish InterCode-CTF, a new dataset consisting of 100 CTF objectives from picoCTF [42]. Following the interactive coding task formulation, each task instance in InterCode-CTF is given as a <instruction, assets, hidden flag> tuple. We first construct a Bourne Shell within an Ubuntu OS as the task environment. Here, InterCode's use of virtual containers is crucial, as necessary actions can be irreversibly damaging on real systems (i.e. rm -rf, sudo access). Per task instance, the associated assets (e.g., images, executables, code), necessary for task completion, are copied into the OS file system. Given this setting, a task worker must understand the given material and investigate the assets to develop potential solutions. Executing a successful approach must be done across multiple steps with various conditionals, where the execution feedback of a prior step could have a significant effect on the next step. Figure 4 spotlights the diverse skills needed for CTF.
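The setup described above (an Ubuntu container, task assets copied into the file system, multi-step Bash interaction) can be reproduced in miniature with the Docker SDK for Python. The sketch below is not the InterCode implementation; the image tag, the /ctf directory, and the example asset and flag are illustrative assumptions.

```python
# Minimal sketch of an isolated Bash sandbox for a CTF-style task instance.
import io
import tarfile

import docker


def make_task_container(assets):
    """assets: mapping of file name -> bytes to copy into the container."""
    client = docker.from_env()
    # Keep the container alive so many commands can be executed against it.
    container = client.containers.run(
        "ubuntu:22.04", command="sleep infinity", detach=True, tty=True
    )
    container.exec_run("mkdir -p /ctf")
    # put_archive expects a tar archive of the files to copy in.
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, data in assets.items():
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    buf.seek(0)
    container.put_archive("/ctf", buf.getvalue())
    return container


def run_bash(container, command):
    # Each action is a Bash command; stdout/stderr is the observation.
    exit_code, output = container.exec_run(["bash", "-lc", command], workdir="/ctf")
    return exit_code, output.decode(errors="replace")


if __name__ == "__main__":
    c = make_task_container({"clue.txt": b"picoCTF{example_flag}\n"})
    try:
        print(run_bash(c, "ls -la && cat clue.txt"))
    finally:
        c.remove(force=True)
```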
# 6 Discussion | 2306.14898#38 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |
2306.14565 | 39 | 8
ACCURACY(GPT4)     Ours  MiniGPT4  LLaVA  InstructBLIP  MMGPT  mPLUG-Owl
  Neg1             8.90  3.72      2.09   5.50          1.13   4.20
  Neg2             6.50  2.57      1.42   2.18          0.96   2.46
  Neg3             6.25  2.30      1.56   2.38          0.94   2.57
RELEVANCY(GPT4)    Ours  MiniGPT4  LLaVA  InstructBLIP  MMGPT  mPLUG-Owl
  Neg1             8.96  5.94      4.83   7.22          2.24   5.35
  Neg2             8.46  2.53      1.82   2.73          1.19   3.16
  Neg3             8.21  2.40      1.78   2.39          0.98   2.87
Table 6: Completed evaluation results on Neg1: Nonexistent Object Manipulation, Neg2: Existent Object Manipulation and Neg3: Knowledge Manipulation by GAVIE. | 2306.14565#39 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
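The GAVIE accuracy and relevancy scores reported in Table 6 above come from prompting GPT-4 to grade each answer against the image's dense captions. A minimal sketch of that judging step, assuming the OpenAI chat API; the prompt wording and the 0-10 scale are illustrative, not the paper's exact template.

```python
# Sketch of a GPT4-assisted grader in the spirit of GAVIE (prompt is assumed).
from openai import OpenAI

client = OpenAI()


def judge(instruction, dense_captions, answer):
    prompt = (
        "You are an expert grader. Image content (dense captions):\n"
        f"{dense_captions}\n\n"
        f"Instruction: {instruction}\n"
        f"Model answer: {answer}\n\n"
        "Rate ACCURACY (is the answer consistent with the image content?) and "
        "RELEVANCY (does it address the instruction?) on a 0-10 scale, "
        "then briefly justify each score."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content
```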
2306.14824 | 39 | and associating noun phrases and referring expressions in the caption to the objects or regions in the scene. KOSMOS-2 enables new capabilities of perceiving image regions and grounding text output to the visual world, which makes grounding a foundation capability of MLLMs in many downstream applications. Experimental results demonstrate that KOSMOS-2 achieves impressive results on language and vision-language tasks evaluated in KOSMOS-1, grounding tasks including phrase grounding and referring expression comprehension, and referring tasks such as referring expression generation.
# Acknowledgement
Some examples (such as Figure 1) are taken from the WHOOPS corpus [BGBH+23].
# Ethics Statement
The model presented in this paper is intended for academic and research purposes. The utilization of the model to create unsuitable material is strictly forbidden and not endorsed by this work. The accountability for any improper or unacceptable application of the model rests exclusively with the individuals who generated such content. We also put Microsoft AI Principles5 into practice when developing the models.
# References | 2306.14824#39 | Kosmos-2: Grounding Multimodal Large Language Models to the World | We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new
capabilities of perceiving object descriptions (e.g., bounding boxes) and
grounding text to the visual world. Specifically, we represent refer
expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where
object descriptions are sequences of location tokens. Together with multimodal
corpora, we construct large-scale data of grounded image-text pairs (called
GrIT) to train the model. In addition to the existing capabilities of MLLMs
(e.g., perceiving general modalities, following instructions, and performing
in-context learning), Kosmos-2 integrates the grounding capability into
downstream applications. We evaluate Kosmos-2 on a wide range of tasks,
including (i) multimodal grounding, such as referring expression comprehension,
and phrase grounding, (ii) multimodal referring, such as referring expression
generation, (iii) perception-language tasks, and (iv) language understanding
and generation. This work lays out the foundation for the development of
Embodiment AI and sheds light on the big convergence of language, multimodal
perception, action, and world modeling, which is a key step toward artificial
general intelligence. Code and pretrained models are available at
https://aka.ms/kosmos-2. | http://arxiv.org/pdf/2306.14824 | Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei | cs.CL, cs.CV | 20 pages | null | cs.CL | 20230626 | 20230713 | [
{
"id": "2301.13688"
},
{
"id": "2210.08402"
},
{
"id": "2304.08485"
},
{
"id": "1905.00537"
}
] |
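The Kosmos-2 summary above serializes a grounded span as a markdown-style link whose bounding box becomes a sequence of location tokens (the <p>...</p> <box> <loc...> </box> pattern). A minimal sketch of that serialization, assuming a P x P quantization grid and a hypothetical <loc_k> token spelling, neither of which is taken verbatim from the paper:

```python
# Quantize a pixel box into two location tokens (top-left and bottom-right bins).
P = 32  # assumed number of bins per image side


def box_to_loc_tokens(box, width, height, p=P):
    """box = (x1, y1, x2, y2) in pixels -> two '<loc_k>' tokens."""
    x1, y1, x2, y2 = box

    def bin_index(x, y):
        col = min(int(x / width * p), p - 1)
        row = min(int(y / height * p), p - 1)
        return row * p + col

    return f"<loc_{bin_index(x1, y1)}>", f"<loc_{bin_index(x2, y2)}>"


def grounded_span(text, box, width, height):
    tl, br = box_to_loc_tokens(box, width, height)
    return f"<p>{text}</p><box>{tl}{br}</box>"


# e.g. grounded_span("a snowman", (10, 20, 180, 240), 224, 224)
```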
2306.14898 | 39 | # 6 Discussion
Conclusion. We have developed InterCode, a novel lightweight framework that facilitates interaction between Language Models and the underlying environment, enabling them to mimic the human approach to language-to-code generation. Our framework has shown promising results when applied to state-of-the-art models using different prompting styles. It effectively leverages the capabilities of LMs to break down complex tasks and recover from errors within a secure and isolated environment. The ability to seamlessly convert existing datasets into the interactive format using InterCodeEnv API, and furthermore, the Bash and SQL environments, empowers task designers to construct new tasks to unlock the plethora of challenges that await in the space of interactive coding.
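The interaction pattern summarized above (code as actions, execution feedback as observations, a reward on completion) reduces to a short agent loop. The sketch below uses a hypothetical gym-style interface; env, reset, step, and agent.act are illustrative names, not the exact InterCodeEnv API.

```python
# Hypothetical gym-style interaction loop for interactive code generation.
def solve(env, agent, max_turns=10):
    observation = env.reset()              # task instruction + initial state
    reward = 0.0
    for _ in range(max_turns):
        action = agent.act(observation)    # e.g. a Bash or SQL command
        observation, reward, done, info = env.step(action)
        if done:                           # agent issued its final answer
            break
    return reward                          # e.g. a success/partial-credit score
```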
Limitations and future directions. We point out several current limitations of InterCode. At this time, the number of InterCode based environments is limited to Bash, SQL, and Python action spaces and datasets; within the near future, we plan to expand the number of offerings to cover a wider set of programming languages and datasets that should further deliver on InterCode's purported promises of efficient and expressive task construction. Second, the CTF dataset is limited to just four task instances due to our manual curation procedure. We hope to release more formal work soon that provides a more thorough analysis of the reasoning and collaboration challenges of the CTF task along with a more extensive dataset for evaluation purposes.
# Acknowledgements | 2306.14898#39 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |
2306.14565 | 40 | Table 6: Completed evaluation results on Neg1: Nonexistent Object Manipulation, Neg2: Existent Object Manipulation and Neg3: Knowledge Manipulation by GAVIE.
(left) GAVIE stability     Accuracy-STD  Accuracy-Mean
  Ours                     2.42          6.60
  MiniGPT4                 2.46          3.76
  InstructBLIP             2.42          5.29
  mPLUG-Owl                1.96          0.87
  LLaVA                    2.37          3.80
  MMGPT                    0.65          4.84
(right) Training ratio     Accpos        Accneg
  All Pos                  0.97          0.05
  Pos:Neg=2:1              0.95          0.50
  Pos:Neg=1:1              0.92          0.85
  Pos:Neg=1:2              0.87          0.86
  All Neg                  0.10          0.98
Table 7: (left): Evaluation of the stability of GAVIE. STD means standard deviation. Completed results are shown in Tab. 9. (right): Results of different composition ratios in instruction tuning. | 2306.14565#40 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14824 | 40 | # References
[ADL+22] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, Roman Ring, Eliza Rutherford, Serkan Cabi, Tengda Han, Zhitao Gong, Sina Samangooei, Marianne Monteiro, Jacob Menick, Sebastian Borgeaud, Andrew Brock, Aida Nematzadeh, Sahand Sharifzadeh, Mikolaj Binkowski, Ricardo Barreira, Oriol Vinyals, Andrew Zisserman, and Karen Simonyan. Flamingo: a visual language model for few-shot learning. In Advances in Neural Information Processing Systems, 2022.
[AHR+22] Armen Aghajanyan, Bernie Huang, Candace Ross, Vladimir Karpukhin, Hu Xu, Naman Goyal, Dmytro Okhonko, Mandar Joshi, Gargi Ghosh, Mike Lewis, and Luke Zettlemoyer. CM3: A causal masked multimodal model of the Internet. ArXiv, abs/2201.07520, 2022. | 2306.14824#40 | Kosmos-2: Grounding Multimodal Large Language Models to the World | We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new
capabilities of perceiving object descriptions (e.g., bounding boxes) and
grounding text to the visual world. Specifically, we represent refer
expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where
object descriptions are sequences of location tokens. Together with multimodal
corpora, we construct large-scale data of grounded image-text pairs (called
GrIT) to train the model. In addition to the existing capabilities of MLLMs
(e.g., perceiving general modalities, following instructions, and performing
in-context learning), Kosmos-2 integrates the grounding capability into
downstream applications. We evaluate Kosmos-2 on a wide range of tasks,
including (i) multimodal grounding, such as referring expression comprehension,
and phrase grounding, (ii) multimodal referring, such as referring expression
generation, (iii) perception-language tasks, and (iv) language understanding
and generation. This work lays out the foundation for the development of
Embodiment AI and sheds light on the big convergence of language, multimodal
perception, action, and world modeling, which is a key step toward artificial
general intelligence. Code and pretrained models are available at
https://aka.ms/kosmos-2. | http://arxiv.org/pdf/2306.14824 | Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei | cs.CL, cs.CV | 20 pages | null | cs.CL | 20230626 | 20230713 | [
{
"id": "2301.13688"
},
{
"id": "2210.08402"
},
{
"id": "2304.08485"
},
{
"id": "1905.00537"
}
] |
2306.14898 | 40 | 9
# Acknowledgements
We thank Xiao Liu for the Vicuna/Alpaca APIs, Carlos Jimenez and Yuhan Liu for trying our code, and Princeton NLP Group for helpful discussion and feedback in general. We acknowledge support from the National Science Foundation under Grant No. 2107048. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
# References
[1] M. Agarwal, T. Chakraborti, Q. Fu, D. Gros, X. V. Lin, J. Maene, K. Talamadupula, Z. Teng, and J. White. Neurips 2020 nlc2cmd competition: Translating natural language to bash commands. In H. J. Escalante and K. Hofmann, editors, Proceedings of the NeurIPS 2020 Competition and Demonstration Track, volume 133 of Proceedings of Machine Learning Research, pages 302–324. PMLR, 06–12 Dec 2021. URL https://proceedings.mlr.press/v133/agarwal21b.html.
[2] R. Agashe, S. Iyer, and L. Zettlemoyer. Juice: A large scale distantly supervised dataset for open domain context-based code generation. ArXiv, abs/1910.02216, 2019. | 2306.14898#40 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |
2306.14565 | 41 | How do LMMs perform at the different composition ratios in training data? In Tab. 7 (right), we investigate how LRV-Instruction addresses hallucination issues with different ratios of positive and negative samples in the training set. Inspired by [19], we instruct the model to produce "Yes" or "No" and use classification accuracy on our evaluation set. Accpos is the accuracy on the positive instruction set, while Accneg is the accuracy on the negative instruction set. From Tab. 7 (right), we found that Accneg increases with more negative samples, which verifies our hypothesis that the hallucination problem of current LMMs is due to the lack of negative instructions. Besides, with a balanced ratio (pos:neg=1:1), the model performs the best in both positive and negative sets (see the accuracy-computation sketch below). | 2306.14565#41 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
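The composition-ratio analysis above measures Accpos and Accneg as plain classification accuracy over "Yes"/"No" answers on the positive and negative instruction subsets. A minimal sketch of that computation, with assumed field names for the evaluation records:

```python
# Compute Accpos / Accneg from per-example Yes/No predictions.
def yes_no_accuracy(records):
    """records: iterable of dicts with 'prediction', 'label', 'is_negative'."""
    pos_hits = pos_total = neg_hits = neg_total = 0
    for r in records:
        correct = r["prediction"].strip().lower() == r["label"].strip().lower()
        if r["is_negative"]:
            neg_total += 1
            neg_hits += correct
        else:
            pos_total += 1
            pos_hits += correct
    acc_pos = pos_hits / max(pos_total, 1)
    acc_neg = neg_hits / max(neg_total, 1)
    return acc_pos, acc_neg
```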
2306.14824 | 41 | [BGBH+23] Nitzan Bitton-Guetta, Yonatan Bitton, Jack Hessel, Ludwig Schmidt, Yuval Elovici, Gabriel Stanovsky, and Roy Schwartz. Breaking common sense: WHOOPS! a vision-and-language benchmark of synthetic and compositional images. ArXiv, abs/2303.07274, 2023.
[BPK+22] Minwoo Byeon, Beomhee Park, Haecheon Kim, Sungjun Lee, Woonhyuk Baek, and Saehoon Kim. Coyo-700m: Image-text pair dataset, 2022.
5 https://www.microsoft.com/ai/responsible-ai
[CLY+19] Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. Uniter: Universal image-text representation learning. In European Conference on Computer Vision, 2019.
[CSL+21] Ting Chen, Saurabh Saxena, Lala Li, David J. Fleet, and Geoffrey E. Hinton. Pix2seq: A language modeling framework for object detection. ArXiv, abs/2109.10852, 2021. | 2306.14824#41 | Kosmos-2: Grounding Multimodal Large Language Models to the World | We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new
capabilities of perceiving object descriptions (e.g., bounding boxes) and
grounding text to the visual world. Specifically, we represent refer
expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where
object descriptions are sequences of location tokens. Together with multimodal
corpora, we construct large-scale data of grounded image-text pairs (called
GrIT) to train the model. In addition to the existing capabilities of MLLMs
(e.g., perceiving general modalities, following instructions, and performing
in-context learning), Kosmos-2 integrates the grounding capability into
downstream applications. We evaluate Kosmos-2 on a wide range of tasks,
including (i) multimodal grounding, such as referring expression comprehension,
and phrase grounding, (ii) multimodal referring, such as referring expression
generation, (iii) perception-language tasks, and (iv) language understanding
and generation. This work lays out the foundation for the development of
Embodiment AI and sheds light on the big convergence of language, multimodal
perception, action, and world modeling, which is a key step toward artificial
general intelligence. Code and pretrained models are available at
https://aka.ms/kosmos-2. | http://arxiv.org/pdf/2306.14824 | Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei | cs.CL, cs.CV | 20 pages | null | cs.CL | 20230626 | 20230713 | [
{
"id": "2301.13688"
},
{
"id": "2210.08402"
},
{
"id": "2304.08485"
},
{
"id": "1905.00537"
}
] |
2306.14898 | 41 | [3] R. Anil, A. M. Dai, O. Firat, M. Johnson, D. Lepikhin, A. Passos, S. Shakeri, E. Taropa, P. Bailey, Z. Chen, E. Chu, J. H. Clark, L. E. Shafey, Y. Huang, K. Meier-Hellstern, and et al. Palm 2 technical report, 2023.
[4] J. Austin, A. Odena, M. Nye, M. Bosma, H. Michalewski, D. Dohan, E. Jiang, C. Cai, M. Terry, Q. Le, and C. Sutton. Program synthesis with large language models, 2021.
[5] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. Openai gym, 2016.
[6] R. Bunel, M. Hausknecht, J. Devlin, R. Singh, and P. Kohli. Leveraging grammar and reinforcement learning for neural program synthesis, 2018. | 2306.14898#41 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |
2306.14565 | 42 | Use Pseudo Dense Captions instead of GT from Visual Genome to Generate Instructions. To demonstrate the scalability of our dataset, we use pseudo-dense captions generated by GRiT [41] to replace the GT captions in the Visual Genome dataset. We remove images with fewer than 15 objects detected by GRiT to ensure GPT4 has enough visual information when generating visual instructions. From Tab. 5, we found that finetuning on pseudo captions also improves performance compared to the original mPLUG-Owl. This demonstrates that our visual instruction generation method can be further scaled up without groundtruth dense captions.
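The filtering step described above (keep only images with at least 15 GRiT-detected objects before prompting GPT-4 for instructions) is straightforward; the sketch below assumes GRiT outputs are available as a mapping from image id to a list of detected regions, which is an illustrative data layout rather than GRiT's native format.

```python
# Keep only images whose pseudo dense captions cover enough objects.
MIN_OBJECTS = 15


def filter_images(grit_outputs):
    """grit_outputs: dict of image_id -> list of detected regions/captions."""
    return {
        image_id: regions
        for image_id, regions in grit_outputs.items()
        if len(regions) >= MIN_OBJECTS
    }
```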
# 7 Conclusion | 2306.14565#42 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14824 | 42 | [DKG+22] Zi-Yi Dou, Aishwarya Kamath, Zhe Gan, Pengchuan Zhang, Jianfeng Wang, Linjie Li, Zicheng Liu, Ce Liu, Yann LeCun, Nanyun Peng, Jianfeng Gao, and Lijuan Wang. Coarse-to-fine vision-language pre-training with fusion in the backbone. ArXiv, abs/2206.07643, 2022.
[DXS+23] Danny Driess, F. Xia, Mehdi S. M. Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Ho Vuong, Tianhe Yu, Wenlong Huang, Yevgen Chebotar, Pierre Sermanet, Daniel Duckworth, Sergey Levine, Vincent Vanhoucke, Karol Hausman, Marc Toussaint, Klaus Greff, Andy Zeng, Igor Mordatch, and Peter R. Florence. Palm-e: An embodied multimodal language model. ArXiv, abs/2303.03378, 2023. | 2306.14824#42 | Kosmos-2: Grounding Multimodal Large Language Models to the World | We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new
capabilities of perceiving object descriptions (e.g., bounding boxes) and
grounding text to the visual world. Specifically, we represent refer
expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where
object descriptions are sequences of location tokens. Together with multimodal
corpora, we construct large-scale data of grounded image-text pairs (called
GrIT) to train the model. In addition to the existing capabilities of MLLMs
(e.g., perceiving general modalities, following instructions, and performing
in-context learning), Kosmos-2 integrates the grounding capability into
downstream applications. We evaluate Kosmos-2 on a wide range of tasks,
including (i) multimodal grounding, such as referring expression comprehension,
and phrase grounding, (ii) multimodal referring, such as referring expression
generation, (iii) perception-language tasks, and (iv) language understanding
and generation. This work lays out the foundation for the development of
Embodiment AI and sheds light on the big convergence of language, multimodal
perception, action, and world modeling, which is a key step toward artificial
general intelligence. Code and pretrained models are available at
https://aka.ms/kosmos-2. | http://arxiv.org/pdf/2306.14824 | Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei | cs.CL, cs.CV | 20 pages | null | cs.CL | 20230626 | 20230713 | [
{
"id": "2301.13688"
},
{
"id": "2210.08402"
},
{
"id": "2304.08485"
},
{
"id": "1905.00537"
}
] |
2306.14565 | 43 | # 7 Conclusion
In this work, we constructed LRV-Instruction, a large and diverse dataset containing 400k visual instructions, covering 16 vision and language tasks with both positive and negative instructions at different semantic levels and styles. With LRV-Instruction, we comprehensively investigated the hallucination of existing LMMs and empirically validated the effectiveness of LRV-Instruction for more robust visual instruction tuning. In addition, we propose GAVIE, a novel approach to evaluating visual instruction tuning that requires no human-labeled groundtruth answers and can be easily adapted to different instruction formats. We hope our work can help address the unexpected hallucination issues of LMMs. Future directions include replacing the vision encoders in current LMMs with more powerful visual models to match the capabilities of multimodal GPT4 and investigating other biases of LMMs to develop more robust models.
# References
[1] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35:23716–23736, 2022.
9 | 2306.14565#43 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14824 | 43 | [HDW+23] Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, Saksham Singhal, Shuming Ma, Tengchao Lv, Lei Cui, Owais Khan Mohammed, Qiang Liu, Kriti Aggarwal, Zewen Chi, Johan Bjorck, Vishrav Chaudhary, Subhojit Som, Xia Song, and Furu Wei. Language is not all you need: Aligning perception with language models. ArXiv, abs/2302.14045, 2023.
[HMVLB20] Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. spaCy: Industrial-strength Natural Language Processing in Python. 2020.
[HSD+22] Yaru Hao, Haoyu Song, Li Dong, Shaohan Huang, Zewen Chi, Wenhui Wang, Shuming Ma, and Furu Wei. Language models are general-purpose interfaces. ArXiv, abs/2206.06336, 2022.
[HSLS22] Or Honovich, Thomas Scialom, Omer Levy, and Timo Schick. Unnatural instructions: Tuning language models with (almost) no human labor, 2022. | 2306.14824#43 | Kosmos-2: Grounding Multimodal Large Language Models to the World | We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new
capabilities of perceiving object descriptions (e.g., bounding boxes) and
grounding text to the visual world. Specifically, we represent refer
expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where
object descriptions are sequences of location tokens. Together with multimodal
corpora, we construct large-scale data of grounded image-text pairs (called
GrIT) to train the model. In addition to the existing capabilities of MLLMs
(e.g., perceiving general modalities, following instructions, and performing
in-context learning), Kosmos-2 integrates the grounding capability into
downstream applications. We evaluate Kosmos-2 on a wide range of tasks,
including (i) multimodal grounding, such as referring expression comprehension,
and phrase grounding, (ii) multimodal referring, such as referring expression
generation, (iii) perception-language tasks, and (iv) language understanding
and generation. This work lays out the foundation for the development of
Embodiment AI and sheds light on the big convergence of language, multimodal
perception, action, and world modeling, which is a key step toward artificial
general intelligence. Code and pretrained models are available at
https://aka.ms/kosmos-2. | http://arxiv.org/pdf/2306.14824 | Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei | cs.CL, cs.CV | 20 pages | null | cs.CL | 20230626 | 20230713 | [
{
"id": "2301.13688"
},
{
"id": "2210.08402"
},
{
"id": "2304.08485"
},
{
"id": "1905.00537"
}
] |
2306.14898 | 43 | [9] M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. de Oliveira Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, A. Ray, R. Puri, G. Krueger, M. Petrov, H. Khlaaf, G. Sastry, P. Mishkin, B. Chan, S. Gray, N. Ryder, M. Pavlov, A. Power, L. Kaiser, M. Bavarian, C. Winter, P. Tillet, F. P. Such, D. Cummings, M. Plappert, F. Chantzis, E. Barnes, A. Herbert-Voss, W. H. Guss, A. Nichol, A. Paino, N. Tezak, J. Tang, I. Babuschkin, S. Balaji, S. Jain, W. Saunders, C. Hesse, A. N. Carr, J. Leike, J. Achiam, V. Misra, E. Morikawa, A. Radford, M. Knight, M. Brundage, M. Murati, K. Mayer, P. Welinder, B. McGrew, | 2306.14898#43 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |
2306.14565 | 44 | 9
[2] Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. Spice: Semantic propositional image caption evaluation. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part V 14, pages 382–398. Springer, 2016.
[3] Anas Awadalla, Irena Gao, Joshua Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, and Ludwig Schmidt. Openflamingo, March 2023.
[4] Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, et al. A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity. arXiv preprint arXiv:2302.04023, 2023. | 2306.14565#44 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14824 | 44 | [JCS+22] Woojeong Jin, Yu Cheng, Yelong Shen, Weizhu Chen, and Xiang Ren. A good prompt is worth millions of parameters: Low-resource prompt-based learning for vision-language models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2763–2775, Dublin, Ireland, May 2022. Association for Computational Linguistics.
[JMC+23] Woojeong Jin, Subhabrata Mukherjee, Yu Cheng, Yelong Shen, Weizhu Chen, Ahmed Hassan Awadallah, Damien Jose, and Xiang Ren. Grill: Grounded vision-language pre-training via aligning text and image regions. ArXiv, abs/2305.14676, 2023.
[KSL+21] Aishwarya Kamath, Mannat Singh, Yann LeCun, Ishan Misra, Gabriel Synnaeve, and Nicolas Carion. Mdetr - modulated detection for end-to-end multi-modal understanding. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 1760–1770, 2021. | 2306.14824#44 | Kosmos-2: Grounding Multimodal Large Language Models to the World | We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new
capabilities of perceiving object descriptions (e.g., bounding boxes) and
grounding text to the visual world. Specifically, we represent refer
expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where
object descriptions are sequences of location tokens. Together with multimodal
corpora, we construct large-scale data of grounded image-text pairs (called
GrIT) to train the model. In addition to the existing capabilities of MLLMs
(e.g., perceiving general modalities, following instructions, and performing
in-context learning), Kosmos-2 integrates the grounding capability into
downstream applications. We evaluate Kosmos-2 on a wide range of tasks,
including (i) multimodal grounding, such as referring expression comprehension,
and phrase grounding, (ii) multimodal referring, such as referring expression
generation, (iii) perception-language tasks, and (iv) language understanding
and generation. This work lays out the foundation for the development of
Embodiment AI and sheds light on the big convergence of language, multimodal
perception, action, and world modeling, which is a key step toward artificial
general intelligence. Code and pretrained models are available at
https://aka.ms/kosmos-2. | http://arxiv.org/pdf/2306.14824 | Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei | cs.CL, cs.CV | 20 pages | null | cs.CL | 20230626 | 20230713 | [
{
"id": "2301.13688"
},
{
"id": "2210.08402"
},
{
"id": "2304.08485"
},
{
"id": "1905.00537"
}
] |
2306.14565 | 45 | [5] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
[6] Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3558–3568, 2021.
[7] Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, 2023. | 2306.14565#45 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14824 | 45 | [KZG+16] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Li Fei-Fei. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123:32–73, 2016.
[LHV+23] Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V Le, Barret Zoph, Jason Wei, et al. The flan collection: Designing data and methods for effective instruction tuning. arXiv preprint arXiv:2301.13688, 2023.
[LLSH23] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. ArXiv, abs/2301.12597, 2023.
[LLWL23] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv preprint arXiv:2304.08485, 2023.
2306.14898 | 45 | [10] W. Chen, X. Ma, X. Wang, and W. W. Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks, 2023.
[11] X. Chen, C. Liu, and D. X. Song. Execution-guided neural program synthesis. In International Conference on Learning Representations, 2018.
[12] X. Chen, M. Lin, N. Schärli, and D. Zhou. Teaching large language models to self-debug. arXiv preprint arXiv:2304.05128, 2023.
[13] W.-L. Chiang, Z. Li, Z. Lin, Y. Sheng, Z. Wu, H. Zhang, L. Zheng, S. Zhuang, Y. Zhuang, J. E. Gonzalez, I. Stoica, and E. P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. URL https://lmsys.org/blog/2023-03-30-vicuna/.
[14] C. B. Clement, D. Drain, J. Timcheck, A. Svyatkovskiy, and N. Sundaresan. Pymt5: multi-mode translation of natural language and python code with transformers, 2020.
2306.14565 | 46 | [8] Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructblip: Towards general-purpose vision-language models with instruction tuning. arXiv preprint arXiv:2305.06500, 2023.
[9] Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Zhenyu Qiu, Wei Lin, Jinrui Yang, Xiawu Zheng, et al. Mme: A comprehensive evaluation benchmark for multimodal large language models. arXiv preprint arXiv:2306.13394, 2023.
[10] Fabrizio Gilardi, Meysam Alizadeh, and Maël Kubli. Chatgpt outperforms crowd-workers for text-annotation tasks. arXiv preprint arXiv:2303.15056, 2023.
2306.14824 | 46 | [LYY+19] Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. Visualbert: A simple and performant baseline for vision and language. ArXiv, abs/1908.03557, 2019.
[LZZ+22] Liunian Harold Li*, Pengchuan Zhang*, Haotian Zhang*, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, Kai-Wei Chang, and Jianfeng Gao. Grounded language-image pre-training. In CVPR, 2022.
[MHT+15] Junhua Mao, Jonathan Huang, Alexander Toshev, Oana-Maria Camburu, Alan Loddon Yuille, and Kevin P. Murphy. Generation and comprehension of unambiguous object descriptions. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 11–20, 2015.
[MWH+22] Shuming Ma, Hongyu Wang, Shaohan Huang, Wenhui Wang, Zewen Chi, Li Dong, Alon Benhaim, Barun Patra, Vishrav Chaudhary, Xia Song, and Furu Wei. TorchScale: Transformers at scale. CoRR, abs/2211.13184, 2022.
2306.14898 | 46 | [15] C. Cowan, S. Arnold, S. Beattie, C. Wright, and J. Viega. Defcon capture the flag: defending vulnerable code from intense attack. In Proceedings DARPA Information Survivability Conference and Exposition, volume 1, pages 120–129 vol.1, 2003. doi: 10.1109/DISCEX.2003.1194878.
[16] L. Dong and M. Lapata. Language to logical form with neural attention, 2016.
[17] K. Ellis, M. Nye, Y. Pu, F. Sosa, J. Tenenbaum, and A. Solar-Lezama. Write, execute, assess: Program synthesis with a repl, 2019.
[18] Z. Feng, D. Guo, D. Tang, N. Duan, X. Feng, M. Gong, L. Shou, B. Qin, T. Liu, D. Jiang, and M. Zhou. Codebert: A pre-trained model for programming and natural languages, 2020.
2306.14565 | 47 | [11] Tao Gong, Chengqi Lyu, Shilong Zhang, Yudong Wang, Miao Zheng, Qian Zhao, Kuikun Liu, Wenwei Zhang, Ping Luo, and Kai Chen. Multimodal-gpt: A vision and language model for dialogue with humans. arXiv preprint arXiv:2305.04790, 2023.
[12] Drew A Hudson and Christopher D Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6700–6709, 2019.
[13] MV Koroteev. Bert: a review of applications in natural language processing and understanding. arXiv preprint arXiv:2103.11943, 2021.
[14] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International journal of computer vision, 123:32–73, 2017.
2306.14824 | 47 | [Ope23] OpenAI. Gpt-4 technical report. 2023.
[PWC+15] Bryan A. Plummer, Liwei Wang, Christopher M. Cervantes, Juan C. Caicedo, J. Hockenmaier, and Svetlana Lazebnik. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. International Journal of Computer Vision, 123:74–93, 2015.
[SBV+22] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. arXiv preprint arXiv:2210.08402, 2022.
[VLZP15] Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. Cider: Consensus-based image description evaluation. In CVPR, pages 4566–4575, 2015.
2306.14898 | 47 | [19] L. Gao, S. Biderman, S. Black, L. Golding, T. Hoppe, C. Foster, J. Phang, H. He, A. Thite, N. Nabeshima, S. Presser, and C. Leahy. The pile: An 800gb dataset of diverse text for language modeling, 2020.
[20] D. Hendrycks, S. Basart, S. Kadavath, M. Mazeika, A. Arora, E. Guo, C. Burns, S. Puranik, H. He, D. Song, and J. Steinhardt. Measuring coding challenge competence with apps. NeurIPS, 2021.
[21] J. Huang, C. Wang, J. Zhang, C. Yan, H. Cui, J. P. Inala, C. Clement, and N. Duan. Execution-based evaluation for data science code generation models. In Proceedings of the Fourth Workshop on Data Science with Human-in-the-Loop (Language Advances), pages 28–36, Abu Dhabi, United Arab Emirates (Hybrid), Dec. 2022. Association for Computational Linguistics. URL https://aclanthology.org/2022.dash-1.5.
2306.14565 | 48 | [15] Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu. Otter: A multi-modal model with in-context instruction tuning. arXiv preprint arXiv:2305.03726, 2023.
[16] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597, 2023.
[17] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In International Conference on Machine Learning, pages 12888–12900. PMLR, 2022.
[18] Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. Visualbert: A simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557, 2019.
2306.14824 | 48 | [WCC+23] Wen Wang, Zhe Chen, Xiaokang Chen, Jiannan Wu, Xizhou Zhu, Gang Zeng, Ping Luo, Tong Lu, Jie Zhou, Y. Qiao, and Jifeng Dai. Visionllm: Large language model is also an open-ended decoder for vision-centric tasks. ArXiv, abs/2305.11175, 2023.
[WMH+22] Hongyu Wang, Shuming Ma, Shaohan Huang, Li Dong, Wenhui Wang, Zhiliang Peng, Yu Wu, Payal Bajaj, Saksham Singhal, Alon Benhaim, Barun Patra, Zhun Liu, Vishrav Chaudhary, Xia Song, and Furu Wei. Foundation transformers. CoRR, abs/2210.06423, 2022.
[WPN+19] Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. arXiv preprint arXiv:1905.00537, 2019.
2306.14898 | 48 | [22] H. Husain, H.-H. Wu, T. Gazit, M. Allamanis, and M. Brockschmidt. Codesearchnet challenge: Evaluating the state of semantic code search, 2020.
[23] S. K. Lahiri, A. Naik, G. Sakkas, P. Choudhury, C. von Veh, M. Musuvathi, J. P. Inala, C. Wang, and J. Gao. Interactive code generation via test-driven user-intent formalization, 2022.
[24] Y. Lai, C. Li, Y. Wang, T. Zhang, R. Zhong, L. Zettlemoyer, S. W. tau Yih, D. Fried, S. Wang, and T. Yu. Ds-1000: A natural and reliable benchmark for data science code generation. ArXiv, abs/2211.11501, 2022.
[25] H. Le, Y. Wang, A. D. Gotmare, S. Savarese, and S. C. H. Hoi. Coderl: Mastering code generation through pretrained models and deep reinforcement learning. Advances in Neural Information Processing Systems, 35:21314–21328, 2022.
2306.14565 | 49 | [19] Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. Evaluating object hallucination in large vision-language models. arXiv preprint arXiv:2305.10355, 2023.
[20] Zongxia Li, Paiheng Xu, Fuxiao Liu, and Hyemi Song. Towards understanding in-context learning with contrastive demonstrations and saliency maps. arXiv preprint arXiv:2307.05052, 2023.
[21] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740–755. Springer, 2014.
[22] Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, and Lijuan Wang. Aligning large multi-modal model with robust instruction tuning. arXiv preprint arXiv:2306.14565, 2023.
2306.14824 | 49 | [WYM+22] Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. In International Conference on Machine Learning, 2022.
[YPY+16] Licheng Yu, Patrick Poirson, Shan Yang, Alexander C. Berg, and Tamara L. Berg. Modeling context in referring expressions. ArXiv, abs/1608.00272, 2016.
[YTBB17] Licheng Yu, Hao Tan, Mohit Bansal, and Tamara L. Berg. A joint speaker-listener-reinforcer model for referring expressions. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 3521–3529. IEEE Computer Society, 2017.
2306.14898 | 49 | [26] C.-H. Lee, O. Polozov, and M. Richardson. KaggleDBQA: Realistic evaluation of text-to-SQL parsers. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2261–2273, Online, Aug. 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.176. URL https://aclanthology.org/2021.acl-long.176.
[27] J. Li, B. Hui, G. Qu, B. Li, J. Yang, B. Li, B. Wang, B. Qin, R. Cao, R. Geng, N. Huo, C. Ma, K. C. C. Chang, F. Huang, R. Cheng, and Y. Li. Can llm already serve as a database interface? a big bench for large-scale database grounded text-to-sqls, 2023.
2306.14565 | 50 | [23] Fuxiao Liu, Hao Tan, and Chris Tensmeyer. Documentclip: Linking figures and main body text in reflowed documents. arXiv preprint arXiv:2306.06306, 2023.
[24] Fuxiao Liu, Yinghan Wang, Tianlu Wang, and Vicente Ordonez. Visual news: Benchmark and challenges in news image captioning. arXiv preprint arXiv:2010.03743, 2020.
[25] Fuxiao Liu, Yaser Yacoob, and Abhinav Shrivastava. Covid-vts: Fact extraction and verification on short video platforms. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 178–188, 2023.
[26] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv preprint arXiv:2304.08485, 2023.
[27] Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. Gpteval: Nlg evaluation using gpt-4 with better human alignment. arXiv preprint arXiv:2303.16634, 2023.
2306.14824 | 50 | # A Hyperparameters
The training hyperparameters of KOSMOS-2 are listed in Table 7.
Hyperparameters                                   Value
Image embedding number                            64
Location tokens                                   1,024
Training steps                                    60,000
Warmup steps                                      375
Optimizer                                         AdamW
Learning rate                                     2e-4
Learning rate decay                               Linear
Adam β                                            (0.9, 0.98)
Weight decay                                      0.01
Batch size of text corpora                        93
Batch size of original image-caption pairs        1,117
Batch size of grounded image-text pairs           1,117
Batch size of interleaved data                    47
# Table 7: Training hyperparameters of KOSMOS-2
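To make the optimizer and schedule settings in Table 7 concrete, here is a minimal, illustrative PyTorch-style sketch. It only restates the Table 7 values (AdamW, peak learning rate 2e-4, betas (0.9, 0.98), weight decay 0.01, 375 warmup steps, 60,000 total steps with linear decay); the `model` placeholder and helper names are assumptions, not the released KOSMOS-2 training code.

```python
# Illustrative sketch of the Table 7 optimizer/schedule; not the official training code.
import torch

model = torch.nn.Linear(8, 8)  # stand-in placeholder for the real model

total_steps = 60_000   # Training steps (Table 7)
warmup_steps = 375     # Warmup steps (Table 7)

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=2e-4,              # peak learning rate
    betas=(0.9, 0.98),    # Adam beta values
    weight_decay=0.01,    # weight decay
)

def linear_warmup_then_decay(step: int) -> float:
    # Linear warmup for the first 375 steps, then linear decay toward 0 at 60k steps.
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=linear_warmup_then_decay)
```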
The instruction tuning hyperparameters are listed in Table 8.
Hyperparameters                                                         Value
Training steps                                                          10,000
Warmup steps                                                            375
Learning rate                                                           1e-5
Batch size of language instruction data                                 117
Batch size of vision-language instruction data                          351
Batch size of grounded image-text pairs & grounded instruction data     1404
Batch size of text corpora                                              30
Batch size of interleaved data                                          15
Table 8: Instruction tuning hyperparameters of KOSMOS-2
2306.14565 | 51 | [28] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
[29] OpenAI. Gpt-4 technical report. 2023.
[30] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
[31] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748–8763. PMLR, 2021.
2306.14824 | 51 | # B Templates for Grounded Instruction Data
Table 9 presents the instruction templates of expression generation based on its associated bounding boxes during instruction tuning.
⢠"What is <p> it </p><box><loc1><loc2></box>? It is {expression}." ⢠"What is <p> this </p><box><loc1><loc2></box>? This is {expression}." ⢠"Describe <p> this object </p><box><loc1><loc2></box>. This object is {expression}." ⢠"<p> It </p><box><loc1><loc2></box> is {expression}." ⢠"<p> This </p><box><loc1><loc2></box> is {expression}." ⢠"<p> The object </p><box><loc1><loc2></box> is {expression}."
Table 9: Instruction templates used for expression generation.
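As a small worked example of how one of the Table 9 templates turns into a training string, the sketch below fills in the {expression} slot and the two location-token slots. The helper and variable names are hypothetical, and the concrete expression and token indices are made-up illustrations rather than values from the paper.

```python
# Illustrative sketch only: fill one Table 9 template with an expression and
# a pair of location tokens; not taken from the released KOSMOS-2 code.
TEMPLATE = "<p> It </p><box>{loc_tl}{loc_br}</box> is {expression}."

def build_grounded_instruction(expression: str, loc_tl: str, loc_br: str) -> str:
    # loc_tl / loc_br are the location tokens for the top-left and bottom-right
    # corners of the object's bounding box, e.g. "<loc_87>" and "<loc_945>".
    return TEMPLATE.format(loc_tl=loc_tl, loc_br=loc_br, expression=expression)

print(build_grounded_instruction("the dog on the left", "<loc_87>", "<loc_945>"))
# -> "<p> It </p><box><loc_87><loc_945></box> is the dog on the left."
```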
# C Examples of GRIT
We present some examples of the GRIT corpus in Figures 6–9. The grounded image-text pairs span over various domains and contain different numbers of objects.
2306.14898 | 51 | [29] Y. Li, D. Choi, J. Chung, N. Kushman, J. Schrittwieser, R. Leblond, T. Eccles, J. Keeling, F. Gimeno, A. D. Lago, T. Hubert, P. Choy, C. de Masson d'Autume, I. Babuschkin, X. Chen, P.-S. Huang, J. Welbl, S. Gowal, A. Cherepanov, J. Molloy, D. J. Mankowitz, E. S. Robson, P. Kohli, N. de Freitas, K. Kavukcuoglu, and O. Vinyals. Competition-level code generation with AlphaCode. Science, 378(6624):1092–1097, dec 2022. doi: 10.1126/science.abq1158. URL https://doi.org/10.1126%2Fscience.abq1158.
[30] J. Liang, W. Huang, F. Xia, P. Xu, K. Hausman, B. Ichter, P. Florence, and A. Zeng. Code as policies: Language model programs for embodied control, 2023.
2306.14565 | 52 | [32] Anna Rohrbach, Lisa Anne Hendricks, Kaylee Burns, Trevor Darrell, and Kate Saenko. Object hallucination in image captioning. arXiv preprint arXiv:1809.02156, 2018.
[33] Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2556–2565, 2018.
[34] Krishna Srinivasan, Karthik Raman, Jiecao Chen, Michael Bendersky, and Marc Najork. Wit: Wikipedia-based image text dataset for multimodal multilingual machine learning. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2443–2449, 2021.
[35] Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. Videobert: A joint model for video and language representation learning. In Proceedings of the IEEE/CVF international conference on computer vision, pages 7464–7473, 2019. | 2306.14565#52 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14824 | 52 | Figure 6: Example from GRIT. Caption: "A serving of kale and roasted vegetable salad on an aluminium tray served with a small white bowl filled with creamy light green avocado Caesar dressing".
Figure 7: Example from GRIT. Caption: "A Keto Chicken Nugget being dipped into a bowl of keto honey mustard.".
Figure 8: Example from GRIT. Caption: "Solar cells on a red roof are in the foreground. The Sydney skyline is in the background.". | 2306.14824#52 | Kosmos-2: Grounding Multimodal Large Language Models to the World | We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new
capabilities of perceiving object descriptions (e.g., bounding boxes) and
grounding text to the visual world. Specifically, we represent refer
expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where
object descriptions are sequences of location tokens. Together with multimodal
corpora, we construct large-scale data of grounded image-text pairs (called
GrIT) to train the model. In addition to the existing capabilities of MLLMs
(e.g., perceiving general modalities, following instructions, and performing
in-context learning), Kosmos-2 integrates the grounding capability into
downstream applications. We evaluate Kosmos-2 on a wide range of tasks,
including (i) multimodal grounding, such as referring expression comprehension,
and phrase grounding, (ii) multimodal referring, such as referring expression
generation, (iii) perception-language tasks, and (iv) language understanding
and generation. This work lays out the foundation for the development of
Embodiment AI and sheds light on the big convergence of language, multimodal
perception, action, and world modeling, which is a key step toward artificial
general intelligence. Code and pretrained models are available at
https://aka.ms/kosmos-2. | http://arxiv.org/pdf/2306.14824 | Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei | cs.CL, cs.CV | 20 pages | null | cs.CL | 20230626 | 20230713 | [
{
"id": "2301.13688"
},
{
"id": "2210.08402"
},
{
"id": "2304.08485"
},
{
"id": "1905.00537"
}
] |
2306.14898 | 52 | 11
[31] C.-Y. Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain, July 2004. Association for Computational Linguistics. URL https://aclanthology.org/W04-1013.
[32] X. V. Lin, C. Wang, L. Zettlemoyer, and M. D. Ernst. NL2Bash: A corpus and semantic parser for natural language interface to the linux operating system. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan, May 2018. European Language Resources Association (ELRA). URL https: //aclanthology.org/L18-1491. | 2306.14898#52 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |
2306.14565 | 53 | [36] Benny J Tang, Angie Boggust, and Arvind Satyanarayan. Vistext: A benchmark for semantically rich chart captioning. arXiv preprint arXiv:2307.05356, 2023.
[37] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
[38] Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4566–4575, 2015.
[39] Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, and Lijuan Wang. Git: A generative image-to-text transformer for vision and language. arXiv preprint arXiv:2205.14100, 2022. | 2306.14565#53 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14824 | 53 | Figure 9: Example from GRIT. Caption: "Woman standing outdoors in a city landscape and wearing a hijab. Her arm is around a young girl who is hugging her side. The background is blurred.".
# D More Examples of KOSMOS-2
As illustrated in Figure 10, the multimodal referring capability used for visual dialogue can unlock potential in human-AI interaction. In Figure 11, our approach demonstrates its in-context learning ability for fine-grained object detection using both text and image descriptions. Figure 12 and Figure 13 showcase more selected examples, including grounded visual question answering, grounded image captioning, and multimodal referring.
[Figure 10 image: visual dialogue panels, e.g. "What is the name of this building?" → "Sydney Opera House."; "When it was completed?" → "In 1973."; "What is the name of this?" → "Sydney Harbour Bridge."; "What he is doing?" → "The man is riding a horse."; "How about this man?" → "He is holding a flag."; "Which person is this one facing to?" → "The person on the right side of the fence."]
Figure 10: Examples of visual dialogue generated from KOSMOS-2.
17 | 2306.14824#53 | Kosmos-2: Grounding Multimodal Large Language Models to the World | We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new
capabilities of perceiving object descriptions (e.g., bounding boxes) and
grounding text to the visual world. Specifically, we represent refer
expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where
object descriptions are sequences of location tokens. Together with multimodal
corpora, we construct large-scale data of grounded image-text pairs (called
GrIT) to train the model. In addition to the existing capabilities of MLLMs
(e.g., perceiving general modalities, following instructions, and performing
in-context learning), Kosmos-2 integrates the grounding capability into
downstream applications. We evaluate Kosmos-2 on a wide range of tasks,
including (i) multimodal grounding, such as referring expression comprehension,
and phrase grounding, (ii) multimodal referring, such as referring expression
generation, (iii) perception-language tasks, and (iv) language understanding
and generation. This work lays out the foundation for the development of
Embodiment AI and sheds light on the big convergence of language, multimodal
perception, action, and world modeling, which is a key step toward artificial
general intelligence. Code and pretrained models are available at
https://aka.ms/kosmos-2. | http://arxiv.org/pdf/2306.14824 | Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei | cs.CL, cs.CV | 20 pages | null | cs.CL | 20230626 | 20230713 | [
{
"id": "2301.13688"
},
{
"id": "2210.08402"
},
{
"id": "2304.08485"
},
{
"id": "1905.00537"
}
] |
2306.14898 | 53 | [33] S. Lu, D. Guo, S. Ren, J. Huang, A. Svyatkovskiy, A. Blanco, C. B. Clement, D. Drain, D. Jiang, D. Tang, G. Li, L. Zhou, L. Shou, L. Zhou, M. Tufano, M. Gong, M. Zhou, N. Duan, N. Sundaresan, S. K. Deng, S. Fu, and S. Liu. Codexglue: A machine learning benchmark dataset for code understanding and generation. CoRR, abs/2102.04664, 2021.
[34] D. Merkel. Docker: lightweight linux containers for consistent development and deployment. Linux journal, 2014(239):2, 2014.
[35] A. Ni, S. Iyer, D. Radev, V. Stoyanov, W. tau Yih, S. I. Wang, and X. V. Lin. Lever: Learning to verify language-to-code generation with execution, 2023.
[36] E. Nijkamp, B. Pang, H. Hayashi, L. Tu, H. Wang, Y. Zhou, S. Savarese, and C. Xiong. Codegen: An open large language model for code with multi-turn program synthesis. ICLR, 2023. | 2306.14898#53 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |
2306.14565 | 54 | [40] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language model with self generated instructions. arXiv preprint arXiv:2212.10560, 2022.
[41] Jialian Wu, Jianfeng Wang, Zhengyuan Yang, Zhe Gan, Zicheng Liu, Junsong Yuan, and Lijuan Wang. Grit: A generative region-to-text transformer for object understanding. arXiv preprint arXiv:2212.00280, 2022.
[42] Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. mplug-owl: Modularization empowers large language models with multimodality. arXiv preprint arXiv:2304.14178, 2023. | 2306.14565#54 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14824 | 54 | Figure 10: Examples of visual dialogue generated from KOSMOS-2.
[Figure 11 image: object-detection panels in which the model is prompted either with text descriptions ("Description of three toed woodpecker: it has black and white stripes throughout the body and a yellow crown. Description of downy woodpecker: it has white spots on its black wings and some red on its crown.") or with an image description ("This is a downy woodpecker"), and in each case outputs the grounded expression <p>a downy woodpecker</p> with its bounding-box location tokens.]
Figure 11: Examples of object detection with multimodal descriptions from KOSMOS-2.
18 | 2306.14824#54 | Kosmos-2: Grounding Multimodal Large Language Models to the World | We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new
capabilities of perceiving object descriptions (e.g., bounding boxes) and
grounding text to the visual world. Specifically, we represent refer
expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where
object descriptions are sequences of location tokens. Together with multimodal
corpora, we construct large-scale data of grounded image-text pairs (called
GrIT) to train the model. In addition to the existing capabilities of MLLMs
(e.g., perceiving general modalities, following instructions, and performing
in-context learning), Kosmos-2 integrates the grounding capability into
downstream applications. We evaluate Kosmos-2 on a wide range of tasks,
including (i) multimodal grounding, such as referring expression comprehension,
and phrase grounding, (ii) multimodal referring, such as referring expression
generation, (iii) perception-language tasks, and (iv) language understanding
and generation. This work lays out the foundation for the development of
Embodiment AI and sheds light on the big convergence of language, multimodal
perception, action, and world modeling, which is a key step toward artificial
general intelligence. Code and pretrained models are available at
https://aka.ms/kosmos-2. | http://arxiv.org/pdf/2306.14824 | Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei | cs.CL, cs.CV | 20 pages | null | cs.CL | 20230626 | 20230713 | [
{
"id": "2301.13688"
},
{
"id": "2210.08402"
},
{
"id": "2304.08485"
},
{
"id": "1905.00537"
}
] |
2306.14898 | 54 | [37] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. Bleu: a method for automatic evaluation of machine translation. In Annual Meeting of the Association for Computational Linguistics, 2002.
[38] R. Puri, D. S. Kung, G. Janssen, W. Zhang, G. Domeniconi, V. Zolotov, J. Dolby, J. Chen, M. Choudhury, L. Decker, V. Thost, L. Buratti, S. Pujar, S. Ramji, U. Finkler, S. Malaika, and F. Reiss. Codenet: A large-scale ai for code dataset for learning a diversity of coding tasks, 2021.
[39] F. Shi, D. Fried, M. Ghazvininejad, L. Zettlemoyer, and S. I. Wang. Natural language to code translation with execution, 2022.
[40] N. Shinn, F. Cassano, E. Berman, A. Gopinath, K. Narasimhan, and S. Yao. Reflexion: Language agents with verbal reinforcement learning, 2023.
[41] K. Tusar. sqlite3mysql, 2018. URL https://github.com/techouse/sqlite3-to-mysql. | 2306.14898#54 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |
2306.14565 | 55 | [43] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. A survey of large language models. arXiv preprint arXiv:2303.18223, 2023.
[44] Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023.
# A Appendix
# A.1 GAVIE Evaluation | 2306.14565#55 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14824 | 55 | Figure 11: Examples of object detection with multimodal descriptions from KOSMOS-2.
[Figure 12 image: (1) grounded question answering panels, e.g. "Question: Where is the coach? Answer: <p>The coach</p> is standing in the middle of the field.", "Question: What does it say? Answer: "Je suis Charlie!" is written in French on a chalkboard.", "Question: What makes this image weird? Answer: The image is weird because the snowman is sitting next to a campfire in the snow."; (2) multimodal referring via bounding boxes, e.g. "Question: Where is it? Answer: In a microwave oven.", "Question: What will happen to it? Answer: It will melt away as it warms up.", "You are driving this car. To pick up that man, you need to go through the intersection and turn right."]
Figure 12: Examples generated from KOSMOS-2.
19 | 2306.14824#55 | Kosmos-2: Grounding Multimodal Large Language Models to the World | We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new
capabilities of perceiving object descriptions (e.g., bounding boxes) and
grounding text to the visual world. Specifically, we represent refer
expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where
object descriptions are sequences of location tokens. Together with multimodal
corpora, we construct large-scale data of grounded image-text pairs (called
GrIT) to train the model. In addition to the existing capabilities of MLLMs
(e.g., perceiving general modalities, following instructions, and performing
in-context learning), Kosmos-2 integrates the grounding capability into
downstream applications. We evaluate Kosmos-2 on a wide range of tasks,
including (i) multimodal grounding, such as referring expression comprehension,
and phrase grounding, (ii) multimodal referring, such as referring expression
generation, (iii) perception-language tasks, and (iv) language understanding
and generation. This work lays out the foundation for the development of
Embodiment AI and sheds light on the big convergence of language, multimodal
perception, action, and world modeling, which is a key step toward artificial
general intelligence. Code and pretrained models are available at
https://aka.ms/kosmos-2. | http://arxiv.org/pdf/2306.14824 | Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei | cs.CL, cs.CV | 20 pages | null | cs.CL | 20230626 | 20230713 | [
{
"id": "2301.13688"
},
{
"id": "2210.08402"
},
{
"id": "2304.08485"
},
{
"id": "1905.00537"
}
] |
2306.14898 | 55 | [41] K. Tusar. sqlite3mysql, 2018. URL https://github.com/techouse/sqlite3-to-mysql.
[42] C. M. University. picoCTF, 2013. URL https://picoctf.org/.
[43] L. Wang, W. Xu, Y. Lan, Z. Hu, Y. Lan, R. K.-W. Lee, and E.-P. Lim. Plan-and-solve prompting: Improving zero-shot chain-of-thought reasoning by large language models, 2023.
[44] X. Wang, Y. Wang, Y. Wan, F. Mi, Y. Li, P. Zhou, J. Liu, H. Wu, X. Jiang, and Q. Liu. Compilable neural code generation with compiler feedback, 2022.
[45] X. Wang, H. Peng, R. Jabbarvand, and H. Ji. Leti: Learning to generate from textual interactions, 2023.
[46] Y. Wang, W. Wang, S. Joty, and S. C. H. Hoi. Codet5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation, 2021. | 2306.14898#55 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |
2306.14565 | 56 | 12
# A Appendix
# A.1 GAVIE Evaluation
We show two full examples of the text prompt for GAVIE in (i) Fig. 21, 22, 23 and (ii) Fig. 24, 25, 26. We first leverage the bounding boxes and dense captions as the "visual" input. We provide the human instructions and responses from different models in Fig. 22 and Fig. 25. Furthermore, we ask GPT4 to act as a smart teacher and score the answers (0-10) according to the image content and instructions. There are two criteria: (1) Accuracy: whether the response is accurate with respect to the image content; (2) Relevancy: whether the response directly follows the instruction. GPT4 is then required to output a score and the corresponding reason. Fig. 23 and Fig. 26 show the full evaluation output from GAVIE.
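A minimal sketch of how such an evaluation prompt could be assembled is given below; the data fields, prompt wording, and helper names are illustrative assumptions rather than the exact GAVIE prompt shown in Fig. 21-26.

```python
# Illustrative sketch of assembling a GAVIE-style evaluation prompt.
# The exact wording of the real prompt is shown in Fig. 21-26; the field
# names below are assumptions for illustration only.
from dataclasses import dataclass
from typing import List


@dataclass
class GavieInstance:
    bounding_boxes: List[str]   # e.g. "person [0.31, 0.22, 0.58, 0.91]" (assumed format)
    dense_captions: List[str]   # region-level captions used as the "visual" input
    instruction: str            # the human instruction given to the LMM
    response: str               # the LMM's answer to be scored


def build_gavie_prompt(x: GavieInstance) -> str:
    """Compose a text prompt asking GPT-4 to act as a smart teacher and
    score the response for accuracy and relevancy (0-10 each)."""
    visual_input = "\n".join(x.bounding_boxes + x.dense_captions)
    return (
        "Suppose you are a smart teacher. Given the image content below "
        "(described by bounding boxes and dense captions), score the answer "
        "from 0 to 10 on two criteria and explain the reason.\n"
        "(1) Accuracy: whether the response is accurate w.r.t. the image.\n"
        "(2) Relevancy: whether the response directly follows the instruction.\n\n"
        f"Image content:\n{visual_input}\n\n"
        f"Instruction: {x.instruction}\n"
        f"Answer: {x.response}\n"
        "Output format: ACCURACY: <score>, RELEVANCY: <score>, REASON: <text>"
    )
```

The composed prompt would then be sent to GPT4 and the two scores parsed from its reply.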
# A.1.1 GPT4-Assisted Visual Instruction Evaluation (GAVIE) vs. Human Evaluation
This section provides insights into GAVIE via human evaluation. Here, we randomly select 40 image-instruction instances from the evaluation set. The human assessment is carried out by three experts specializing in NLP. The questionnaire consists of 40 questions, randomly shuffled for each expert, and takes about 20 minutes to complete on average. Each question includes an instruction, an image, and responses from 4 different LMMs. We provide instructions for experts as follows: | 2306.14565#56 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14824 | 56 | Figure 12: Examples generated from KOSMOS-2.
[Figure 13 image: grounded image captioning panels, each pairing an input prompt ("Describe this bird in detail:" or "Describe this image in detail:") with a completion, e.g. a large bird "with a black and white body and a yellow beak ... It stands on a stump and spreads its wings, displaying its magnificent plumage.", "Six fighter jets fly in formation in the sky, creating a beautiful cloud trail.", and "A man is standing in front of a fire, blowing the flames out of his mouth. He is in the middle of a tropical setting with palm trees surrounding him."]
Figure 13: Examples of grounded image captioning generated from KOSMOS-2.
20 | 2306.14824#56 | Kosmos-2: Grounding Multimodal Large Language Models to the World | We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new
capabilities of perceiving object descriptions (e.g., bounding boxes) and
grounding text to the visual world. Specifically, we represent refer
expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where
object descriptions are sequences of location tokens. Together with multimodal
corpora, we construct large-scale data of grounded image-text pairs (called
GrIT) to train the model. In addition to the existing capabilities of MLLMs
(e.g., perceiving general modalities, following instructions, and performing
in-context learning), Kosmos-2 integrates the grounding capability into
downstream applications. We evaluate Kosmos-2 on a wide range of tasks,
including (i) multimodal grounding, such as referring expression comprehension,
and phrase grounding, (ii) multimodal referring, such as referring expression
generation, (iii) perception-language tasks, and (iv) language understanding
and generation. This work lays out the foundation for the development of
Embodiment AI and sheds light on the big convergence of language, multimodal
perception, action, and world modeling, which is a key step toward artificial
general intelligence. Code and pretrained models are available at
https://aka.ms/kosmos-2. | http://arxiv.org/pdf/2306.14824 | Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei | cs.CL, cs.CV | 20 pages | null | cs.CL | 20230626 | 20230713 | [
{
"id": "2301.13688"
},
{
"id": "2210.08402"
},
{
"id": "2304.08485"
},
{
"id": "1905.00537"
}
] |
2306.14898 | 56 | [47] Z. Wang, G. Zhang, K. Yang, N. Shi, W. Zhou, S. Hao, G. Xiong, Y. Li, M. Y. Sim, X. Chen, Q. Zhu, Z. Yang, A. Nik, Q. Liu, C. Lin, S. Wang, R. Liu, W. Chen, K. Xu, D. Liu, Y. Guo, and J. Fu. Interactive natural language processing, 2023.
[48] Z. Wang, S. Zhou, D. Fried, and G. Neubig. Execution-based evaluation for open-domain code generation, 2023.
[49] S. Yao, R. Rao, M. Hausknecht, and K. Narasimhan. Keep calm and explore: Language models for action generation in text-based games. In Empirical Methods in Natural Language Processing (EMNLP), 2020.
[50] S. Yao, D. Yu, J. Zhao, I. Shafran, T. L. Griffiths, Y. Cao, and K. Narasimhan. Tree of thoughts: Deliberate problem solving with large language models, 2023. | 2306.14898#56 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |
2306.14565 | 57 | "As for each question, there are an instruction, an image, and several answers. Suppose you are a smart teacher, please score the answers according to the two criteria. (1) Accuracy: whether the response is accurate concerning the image content. (2) Relevancy: whether the response directly follows the instruction without unrelated answers. There are four options for the scores (1) Very Poor, (2) Poor, (3) Good, (4) Excellent."
Evaluator                Ours   MiniGPT4   LLaVA   InstructBLIP   MMGPT   mPLUG-Owl
Expert1 (1-4)            3.48   2.61       2.87    3.00           1.90    2.90
Expert2 (1-4)            3.58   2.23       2.07    2.48           1.05    2.27
Expert3 (1-4)            3.33   2.58       2.89    2.94           1.38    2.91
GAVIE-Accuracy (0-10)    6.58   4.14       4.36    5.93           0.91    4.84
GAVIE-Relevancy (0-10)   8.46   5.81       6.11    7.34           1.79    6.35
Table 8: GAVIE vs. Human Evaluation. GAVIE scores roughly align with the expert ratings. Numbers highlighted with red, orange, black, green, blue, and magenta indicate rank 1 to 6. | 2306.14565#57 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14898 | 57 | [51] S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. Narasimhan, and Y. Cao. React: Synergizing reasoning and acting in language models, 2023.
[52] S. Yao, H. Chen, J. Yang, and K. Narasimhan. Webshop: Towards scalable real-world web interaction with grounded language agents. In ArXiv, preprint.
[53] P. Yin and G. Neubig. Reranking for neural semantic parsing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4553â4559, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1447. URL https://aclanthology.org/P19-1447.
[54] P. Yin, W.-D. Li, K. Xiao, A. Rao, Y. Wen, K. Shi, J. Howland, P. Bailey, M. Catasta, H. Michalewski, A. Polozov, and C. Sutton. Natural language to code generation in interactive data science notebooks, 2022. | 2306.14898#57 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |
To evaluate the results quantitatively, we assign a score to each option: Very Poor=1, Poor=2, Good=3, Excellent=4. From Tab. 8, all experts agree that the output from our model is the best, that InstructBLIP comes second, and that MMGPT performs the worst. This observation matches the GAVIE evaluation results. Although the experts' ranking of MiniGPT4 and LLaVA does not always agree with GAVIE's, the scores assigned to the two models are fairly close. One possible reason is that the answers from MiniGPT4 and LLaVA tend to be longer, making them more challenging for humans to evaluate.
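As an illustration, the score mapping and per-expert averaging behind the 1-4 numbers in Tab. 8 could be computed as in the sketch below; the data layout and the example ratings are assumptions, not the actual questionnaire results.

```python
# Sketch of converting expert ratings to numeric scores and averaging per
# model and expert. The rating labels follow the questionnaire; the data
# layout and example values are illustrative assumptions.
from statistics import mean

SCORE = {"Very Poor": 1, "Poor": 2, "Good": 3, "Excellent": 4}

# ratings[model][expert] -> list of option labels, one per question (assumed layout)
ratings = {
    "Ours": {"Expert1": ["Excellent", "Good"], "Expert2": ["Good", "Excellent"]},
    "MMGPT": {"Expert1": ["Poor", "Very Poor"], "Expert2": ["Very Poor", "Poor"]},
}

for model, by_expert in ratings.items():
    for expert, labels in by_expert.items():
        avg = mean(SCORE[label] for label in labels)
        # analogous to the per-expert 1-4 averages reported in Tab. 8
        print(f"{model:8s} {expert}: {avg:.2f}")
```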
# A.1.2 Stability of GPT4-Assisted Visual Instruction Evaluation (GAVIE) | 2306.14565#58 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14898 | 58 | [55] T. Yu, R. Zhang, K. Yang, M. Yasunaga, D. Wang, Z. Li, J. Ma, I. Li, Q. Yao, S. Roman, Z. Zhang, and D. Radev. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3911–3921, Brussels, Belgium, Oct.-Nov. 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1425. URL https://aclanthology.org/D18-1425. | 2306.14898#58 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |
2306.14565 | 59 | # A.1.2 Stability of GPT4-Assisted Visual Instruction Evaluation (GAVIE)
This section investigates the stability of GAVIE. Precisely, we execute GAVIE 5 times on the model predictions. We leverage two metrics to measure the stability of GAVIE on each instance: Mean and Standard Deviation (STD). The average scores of the evaluation set are shown in the following table. From the perspective of the Mean, the ranking order of ACCURACY and RELEVANCY is the same as Tab. 8. As for the Standard Deviation in Tab. 9, it ranges from 0.65 to 2.46. From our observation, the ACCURACY and RELEVANCY scores of an instance may vary between different times, but they belong to the same grade level. Specifically, RELEVANCY has four grade levels: (1) The response is completely relevant (9-10), (2) The response is mostly relevant (6-8), (3) The response is partly relevant (3-5), (4) The response is seldom relevant (0-2). ACCURACY has four grade levels: (1) The response is completely accurate (9-10), (2) The response has minor errors (6-8), (3) The response is partly accurate (3-5), (4) The response is mostly or completely wrong (0-2).
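A minimal sketch of how the per-instance Mean and STD over the 5 GAVIE runs could be computed; the data layout below is purely illustrative and is not the paper's actual file format.

```python
import statistics

# Illustrative layout: for each evaluated instance, the ACCURACY and RELEVANCY
# scores that GAVIE returned in each of the 5 independent runs.
runs_per_instance = [
    {"accuracy": [7, 8, 7, 6, 7], "relevancy": [9, 9, 8, 9, 9]},
    {"accuracy": [3, 4, 3, 3, 5], "relevancy": [5, 6, 5, 5, 6]},
]

per_instance_stats = [
    {
        "acc_mean": statistics.mean(inst["accuracy"]),
        "acc_std": statistics.stdev(inst["accuracy"]),
        "rel_mean": statistics.mean(inst["relevancy"]),
        "rel_std": statistics.stdev(inst["relevancy"]),
    }
    for inst in runs_per_instance
]

# Dataset-level numbers such as those in Table 9 average these per-instance values.
print(statistics.mean(s["acc_mean"] for s in per_instance_stats))
print(statistics.mean(s["acc_std"] for s in per_instance_stats))
```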
13 | 2306.14565#59 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14565 | 60 | 13
Metric: Ours / MiniGPT4 / InstructBLIP / MMGPT / mPLUG-Owl / LLaVA
ACCURACY(GPT4)-Mean: 6.60 / 3.76 / 5.29 / 0.87 / 4.84 / 3.80
RELEVANCY(GPT4)-Mean: 8.37 / 5.35 / 6.83 / 1.71 / 6.35 / 5.65
ACCURACY(GPT4)-STD: 2.42 / 2.46 / 2.42 / 0.65 / 1.96 / 2.37
RELEVANCY(GPT4)-STD: 1.30 / 1.99 / 1.88 / 0.81 / 1.48 / 2.18
Table 9: Evaluation of the stability of GAVIE. We run GAVIE 5 times on randomly selected instances from the evaluation set. Mean and Standard Deviation (STD) are calculated to measure the stability. The ACCURACY(GPT4) and RELEVANCY(GPT4) scores range from 0 to 10.
# A.2 More Experiments
# A.2.1 Do LMMs perform better on Positive or Negative Instructions? | 2306.14565#60 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14898 | 60 | [57] T. Yu, R. Zhang, M. Yasunaga, Y. C. Tan, X. V. Lin, S. Li, H. Er, I. Li, B. Pang, T. Chen, E. Ji, S. Dixit, D. Proctor, S. Shim, J. Kraft, V. Zhang, C. Xiong, R. Socher, and D. Radev. SParC: Cross-domain semantic parsing in context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4511–4523, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1443. URL https://aclanthology.org/P19-1443.
[58] L. Zeng, S. H. K. Parthasarathi, and D. Hakkani-Tur. N-best hypotheses reranking for text-to-sql systems, 2022.
[59] K. Zhang, Z. Li, J. Li, G. Li, and Z. Jin. Self-edit: Fault-aware code editor for code generation, 2023. | 2306.14898#60 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |
2306.14565 | 61 | # A.2 More Experiments
# A.2.1 Do LMMs perform better on Positive or Negative Instructions?
Our evaluation set consists of positive and negative instances. We divide it into two sets and analyze the model performance on each. As shown in Fig. 8, baseline models, including MiniGPT4, LLaVA, and InstructBLIP, perform better on positive instances than negative ones, as the training data adopted by these models does not contain negative instructions. MMGPT performs poorly on both sets due to many repetitive phrases in its responses. In addition, we found that the degradation of LLaVA is the most severe. We hypothesize that the synthetic answers used for instruction tuning in LLaVA are generally longer and involve more unrelated information. In contrast, our model performs the best on both sets. InstructBLIP scores higher than the other LMMs because of the effectiveness of its instruction-aware visual encoder at extracting image information.
# A.2.2 Do LMMs perform better on different formats and lengths of instructions? | 2306.14565#61 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14898 | 61 | [59] K. Zhang, Z. Li, J. Li, G. Li, and Z. Jin. Self-edit: Fault-aware code editor for code generation, 2023.
[60] S. Zhang, Z. Chen, Y. Shen, M. Ding, J. B. Tenenbaum, and C. Gan. Planning with large language models for code generation, 2023.
[61] T. Zhang, T. Yu, T. B. Hashimoto, M. Lewis, W. tau Yih, D. Fried, and S. I. Wang. Coder reviewer reranking for code generation, 2022.
[62] V. Zhong, C. Xiong, and R. Socher. Seq2sql: Generating structured queries from natural language using reinforcement learning, 2017.
[63] S. Zhou, U. Alon, S. Agarwal, and G. Neubig. Codebertscore: Evaluating code generation with pretrained models of code, 2023.
13
# Appendix | 2306.14898#61 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |
2306.14565 | 62 | # A.2.2 Do LMMs perform better on different formats and lengths of instructions?
From Tab. 10, LMMs score higher on interrogative instructions than on declarative ones, but the difference is relatively small. Even though recent visual instruction tuning datasets lack diverse declarative instructions, LMMs built on LLMs are powerful enough to understand and follow declarative instructions. From Fig. 9, current LMMs achieve better results on short instructions than on long ones, since longer instructions contain more information and are therefore more challenging.
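A minimal sketch of the kind of breakdown used for this analysis, assuming each evaluation record carries the instruction text, a format label, and its GAVIE scores; the record layout and the 10-word length threshold are illustrative assumptions.

```python
from statistics import mean

# Hypothetical evaluation records.
records = [
    {"instruction": "Is the cat sitting on the sofa?", "format": "interrogative", "accuracy": 7.0, "relevancy": 8.5},
    {"instruction": "Describe the pattern on the giraffe's coat in the picture.", "format": "declarative", "accuracy": 6.5, "relevancy": 8.0},
]

def bucket(rec):
    # Group by instruction format and a coarse length bucket (word count).
    length = "short" if len(rec["instruction"].split()) <= 10 else "long"
    return rec["format"], length

groups = {}
for rec in records:
    groups.setdefault(bucket(rec), []).append(rec)

for (fmt, length), grp in sorted(groups.items()):
    print(fmt, length,
          "ACCURACY:", round(mean(r["accuracy"] for r in grp), 2),
          "RELEVANCY:", round(mean(r["relevancy"] for r in grp), 2))
```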
# A.3 Prompt Design
# A.3.1 Positive Instance Generation based on Visual Genome Dataset | 2306.14565#62 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14898 | 62 | 13
# Appendix
In this appendix, we provide additional details about the implementation and usage of the InterCode framework and the InterCodeEnv interface. We also provide visualizations and analyses of addi- tional experiments to demonstrate InterCodeâs utility and garner further insight into the extent of current modelsâ performance on the interactive coding task. The full template for each prompting strategy is also included. Finally, we also discuss some of the impacts, risks, and limitations of our work. The webpage for InterCode is https://intercode-benchmark.github.io/. The code for InterCode is https://github.com/princeton-nlp/intercode; the link is also included on the InterCode webpage.
# A Environment Details
# InterCode Interface
The InterCode interface inherits the OpenAI gym [5] environment API definition. Specifically, InterCodeEnv is written as an abstract class that primarily handles the main execution logic for processing code interactions, in addition to logging, data management, and sand-boxed execution, along with both environment-level and task-level customization.
InterCodeEnv exposes the following API. Creating an interactive coding environment requires defining a subclass of InterCodeEnv. The methods denoted with an asterisk can be overridden for the purposes of customization.
# __init__(self, data_path:
# str, image_name: str, **kwargs) | 2306.14898#62 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |
2306.14565 | 63 | We show two full examples of our input prompts in (i) Fig. 11, 12, 13 and (ii) Fig. 14, 15, 16. In Fig. 11 and Fig. 14, we first present the images for the two examples, but they are not included in the text prompt for GPT4. As for the text input, we leverage the groundtruth bounding boxes and dense captions to represent the visual content as if GPT4 can see the image. After that, we randomly select 10 tasks from the 16 seeds and ask GPT4 to generate 20 instances for these tasks. Additionally, there can be more than one caption describing the same object with different attributes, such as "woman wearing a long dress" and "woman wearing a yellow dress" in Fig. 11. Although we present the bounding box coordinates of each caption to GPT4, it can be easily confused, treating them as two instances, one in a long dress and the other in a yellow dress. To mitigate this issue, we add "highly overlapping bounding boxes may refer to the same object" into the prompt to help GPT4 understand the "visual" input better. To enrich the instructions, we ask GPT4 to generate instances in both declarative and interrogative | 2306.14565#63 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14898 | 63 | # __init__(self, data_path: str, image_name: str, **kwargs)
⢠Validates that the dataset specified by data_path is formatted correctly and can be used in an interactive setting.
⢠Uses the Docker image specified by image_name to create and connect with a Docker container instance of the image.
Initializes Logging Handler ⢠Keyword arguments:
â verbose (bool): If true, logging is enabled and environment interactions are shown to standard output
â traj_dir (str): If a valid path is provided, task episode summaries are saved to the given directory (generated by save_trajectory)
â preprocess (callable): If provided, this function is run before every task episode. It is a way to provide task instance-specific customization of the execution environment.
# reset(self, index: int = None) -> Tuple[str, Dict]
• Retrieves task record from data loader
• Calls reset_container
• Resets task-level logger, instance variables
# step(self, action: str) -> Tuple[str, int, bool, Dict]
str) -> Tuple[str, int, bool, Dict] | 2306.14898#63 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |
2306.14565 | 64 | help GPT4 understand the "visual" input better. To enrich the instructions, we ask GPT4 to generate instances in both declarative and interrogative formats. We also explicitly instruct GPT4 with "The answers should be less than 30 words" as a requirement to reduce the chance of generating extra unrelated information in the training data. In order to make the output of GPT4 in a good format, we also ask GPT4 to generate an instruction, an answer, and a task name in order at the end of the prompt (Fig. 11 and Fig. 14). The full output of instructions and answers are shown in Fig. 12, 13 and Fig. 15, 16. We also present more positive instances with the output from different LMMs in Fig. 29, 30, 31. | 2306.14565#64 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14898 | 64 | # step(self, action: str) -> Tuple[str, int, bool, Dict]
• Logs (action, observation)
• Invokes execute_action on the action argument
• If action=submit, invokes get_reward, save_trajectory
# save_trajectory(self)
Saves task metadata, (action, obs.) sequence, and reward info to .json in traj_dir
close(self)
⢠Safely exit or stop any resources (i.e. docker container) used by the environment
execute_action(self, action: str)
• Defines how the action is executed within the context of the Docker container.
• Requires impl. because the Dockerfile definition, particularly its entrypoint, affects how an action would be invoked within the container.
14 | 2306.14898#64 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |
2306.14898 | 65 | action would be invoked within the container.
14
[Figure 5 diagram: the interactive loop. 1. Provide docker image + dataset (data_path = ..., image_name = ..., env = BashEnv(...)); 2. Initialize a new task episode with env.reset(); 3. Interact with the environment until the task is done (while not done: policy(obs) -> act; env.step(act) -> obs, done), which internally calls execute_action(), get_reward(), reset_container(), and save_trajectory(); 4. Close and exit safely with env.close().]
Figure 5: Visualization demonstrating the intended invocations and usage of the InterCodeEnv inter- face, along with how the functions requiring implementation (get_reward(), execute_action(), reset_container() are called by the methods of the main interactive loop.
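The loop in Figure 5, written out as a minimal usage sketch. The import path, constructor arguments, and stub policy are placeholders for whatever concrete environment and assets are used, not the repository's exact API surface.

```python
# Hypothetical driver for the interactive loop in Figure 5.
from intercode.envs import BashEnv  # assumed import path


def policy(observation: str) -> str:
    # Stub policy: a real agent (e.g., an LLM) would map observations to code.
    return "ls"


env = BashEnv(data_path="data/bash_tasks.json", image_name="intercode-bash")  # illustrative args
try:
    for _ in range(3):                       # a few task episodes
        observation, info = env.reset()
        done = False
        while not done:
            action = policy(observation)
            observation, reward, done, info = env.step(action)
finally:
    env.close()                              # exit safely, stopping the container
```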
⢠Default impl. passes the action string directly into a self.container.exec(action) call, which invokes the action in the environment and returns execution output. A timeout is imposed on execution duration.
* get_reward(self) -> Tuple[float, Dict]
• Handles reward calculation of actions with respect to the gold command(s) for a task episode.
• Requires impl. because the concept and scoring for task completion varies across datasets
and environments. | 2306.14898#65 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |
2306.14565 | 66 | 14
Prompt: Here is the description of a chart: "Here is a area Chart. It is labeled Cumulative number of patients diagnosed with coronavirus (COVID-19) in Japan as of December 4, 2020, by place of infection. On the x-axis, Month is measured with a categorical scale starting with April and ending with October. There is a linear scale with a minimum of 0 and a maximum of 150,000 along the y-axis, labeled Patients within Japan." Please generate 3 different questions and answers pairs about title, x-axis, y-axis, data range or data pattern of the chart. The answers should come from the descriptions above. Each Answer must be less than 20 words. The output format should be as follows: question1=> answer1=> question2=> answer2=> question3=> answer3=> [Chart image: Cumulative number of patients diagnosed with coronavirus (COVID-19) in Japan as of December 4, 2020, by place of infection; x-axis: Month; y-axis: Patients within Japan.] GPT4 OUTPUT Example: question1=> What does the area Chart represent? answer1=> Cumulative COVID-19 cases in Japan by place of infection from April to October 2020. question2=> What does the x-axis represent? answer2=> Months from April to October 2020. question3=> When did the greatest increase in COVID-19 cases in Japan occur? answer3=> Between November and December 2020. | 2306.14565#66 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14898 | 66 | • Handles reward calculation of actions with respect to the gold command(s) for a task episode.
• Requires impl. because the concept and scoring for task completion varies across datasets
and environments.
reset_container(self)
• Handles resetting of execution container (i.e. resetting file system to original state).
• Requires impl. because the approach to restoring a setting to its initial state varies.
Figure 5 conveys how each of these methods is invoked and how they relate to one another. In summary, the technicalities for setting up an interactive coding task for a specific system with one or more programming languages as the action space involve:
• Defining a Dockerfile
• Providing a dataset with the query and gold fields
• (Optional) Defining a reward (get_reward) function to define task completion.
• (Optional) Creating an InterCodeEnv subclass that overrides the execute_action and
get_reward methods (a minimal sketch of such a subclass is given below)
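A minimal sketch of such a subclass, assuming the class and method names listed above. The import path, the self.container / self.gold / self.observation attributes, and the exact-match reward are illustrative assumptions, not InterCode's actual internals.

```python
from typing import Dict, Tuple

# Assumed import path and base-class name; the released package may organize
# these differently (the paper refers to the abstract class as InterCodeEnv).
from intercode.envs import IntercodeEnv


class SimpleBashEnv(IntercodeEnv):
    """Illustrative subclass; not InterCode's actual Bash environment."""

    def execute_action(self, action: str) -> None:
        # Run the agent's command inside the task container via a shell.
        # `self.container` (a docker-py container handle) and `self.observation`
        # are assumptions about the base class's internal attributes.
        result = self.container.exec_run(["/bin/bash", "-c", action])
        self.observation = result.output.decode()

    def get_reward(self) -> Tuple[float, Dict]:
        # Toy reward: exact match between the latest observation and the output
        # of the gold command (`self.gold` is likewise an assumed attribute).
        gold = self.container.exec_run(["/bin/bash", "-c", self.gold]).output.decode()
        reward = 1.0 if self.observation.strip() == gold.strip() else 0.0
        return reward, {"gold_output": gold}

    def reset_container(self) -> None:
        # Restore the file system to its committed state before a new episode,
        # mirroring the git-based reset described for the Bash environment.
        self.container.exec_run(["/bin/bash", "-c", "git reset --hard && git clean -fd"])
```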
# A.2 Bash Environment | 2306.14898#66 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |
2306.14565 | 67 | Figure 5: An example prompt for text-only GPT4 we use to generate instructions and answers for chart images. The sentence in BLUE is the caption of the chart.
# A.3.3 Negative Instance Generation - Nonexistent/Existent Object Manipulation
We show two full examples of our input prompts in (i) Fig. 17, 18 and (ii) Fig. 19, 20. In Fig. 17 and Fig. 19, we present the images to help readers understand the dense captions better, but they are not included in the text prompt for GPT4. We leverage the bounding boxes and dense captions as the "visual" input. As for Nonexistent Object Manipulation in Fig. 17, we ask GPT4 to generate 6 instructions with nonexistent elements (nonexistent objects, nonexistent activities, nonexistent attributes, nonexistent interactions). As for Existent Object Manipulation in Fig. 19, we ask GPT4 to generate 6 instructions about existing objects with wrong attributes. At the end of the text prompt, we ask GPT4 to generate an instruction and a reason explaining why the instruction is inconsistent with the image, in that order. The reason is regarded as the answer for the instruction in our training data. Fig. 18 and Fig. 20 show the full output from GPT4. We also present more negative instances with the output from different LMMs in Fig. 27, 28.
# A.3.4 Negative Instance Generation - Knowledge Manipulation | 2306.14565#67 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14898 | 67 | get_reward methods
# A.2 Bash Environment
Environment definition. The Dockerfile defining the Bash-based environment is founded on the LTS version of the Ubuntu operating system. Several Linux dependencies that can potentially be used by an agent to address instructions in the InterCode-Bash Dataset are then installed via the Advanced Package Tool (apt) interface. Next, a shell script is invoked within the Dockerfile to initialize one of the three file systems displayed in Figure 6. The shell script consists of a simple sequence of mkdir, touch, and echo commands to deterministically create and populate the content of multiple files and folders. Finally, git is configured for the purposes of determining file diffs per task episode (git status -s) and resetting an environment to its original state (git reset --hard; git clean -fd;) before the beginning of a new task episode. The original code for the Dockerfile along with the file system creation scripts can be found on the project GitHub repository.
Dataset details. The log-frequency distribution of the top-50 utilities is displayed in Figure 7. The NL2Bash [32] dataset is made available for use under the GPLv3 License. To assess the
15 | 2306.14898#67 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |
2306.14565 | 68 | # A.3.4 Negative Instance Generation - Knowledge Manipulation
As for the Neg3: knowledge manipulation, we use GPT4 to manipulate the knowledge in the captions, including named entities and events.
# Prompt:
Please change the knowledge, including keywords, named entities or event elements, in the description "Cumulative COVID-19 cases in Japan by place of infection from April to October 2020". Output format should be as follows:
# answer=>
# GPT4 OUTPUT Example:
Cumulative influenza cases in France by region of infection from March to October 2020.
Figure 6: An example prompt for text-only GPT4 we use to generate negative instructions. The next step is to transfer the output into an interrogative sentence whose answer is "yes" or "no".
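A minimal sketch of the final assembly step for such a knowledge-manipulation instance. In the paper the interrogative rewrite is done by GPT4; a simple string template stands in for that call here, and the function name is purely illustrative.

```python
def build_negative_instance(manipulated_caption: str, original_answer: str) -> dict:
    # Turn the manipulated statement into a yes/no question, then prepend "No."
    # and the original (correct) answer, as described around Figure 6.
    statement = manipulated_caption.rstrip(".")
    question = "Did the image show " + statement[0].lower() + statement[1:] + "?"
    return {"instruction": question, "answer": "No. " + original_answer}


example = build_negative_instance(
    manipulated_caption="Cumulative influenza cases in France by region of infection from March to October 2020.",
    original_answer="Cumulative COVID-19 cases in Japan by place of infection from April to October 2020.",
)
print(example["instruction"])
print(example["answer"])
```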
15
As shown in Fig. 6, GPT4 manipulates the "Japan", "COVID-19" and "April" in the original captions. After that, we instruct GPT4 to transfer the output sentence into an interrogative sentence whose answer is "yes" or "no". Finally, we combine "No." and the original answer as the final answer: Question: Did the image show the cumulative influenza cases in France by region of infection from March to October 2020? Answer: No. Cumulative COVID-19 cases in Japan by place of infection from April to October 2020". | 2306.14565#68 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14898 | 68 | 15
generalizability of our approach, we designed three distinct file systems to accommodate the bash commands we collected. A key consideration during the construction of these file systems was to ensure that a significant portion of the executed commands would not result in operations that yield no changes. This deliberate design choice aimed to provide a more comprehensive assessment of our approach's adaptability and effectiveness across various scenarios and command executions. The file systems encompass a wide range of file types, including text files (.txt), program files (.c, .java, .py), compressed files (.gz), shell scripts (.sh), PHP scripts (.php), JSON files (.json), documents (.doc), spreadsheets (.csv), webpages (.html), database schemas (.sql), hidden files, files with special characters in their names, and convoluted folder hierarchies. Their directory structures are illustrated in Figure 6. For simplicity, we consider the top-level folder created within the root directory (testbed, system, workspace) as the root of each file system. This root folder contains files and sub-folders that necessitate access and manipulation, while changes are monitored throughout the entire container to accurately evaluate the models' actions. Notably, we intentionally designed file system 1 to be more intricate, with relatively challenging bash tasks compared to the other two file systems; as a result, the models' performance is relatively lower on file system 1. | 2306.14898#68 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |
2306.14565 | 69 | # A.3.5 Prompt Design for Evaluating Knowledge Hallucination
As for knowledge-level hallucination, we use the groundtruth answers as a reference and compare them with the models' predictions. A prompt example for GPT4 is shown in Fig. 7:
# Prompt:
Imagine you are an intelligent teacher. Thoroughly read both the reference answer and the prediction answer to ensure a clear understanding of the information provided. Assess the accuracy of the predictions, and note that a prediction answer will be considered incorrect if it contradicts the reference answer.
Question: What did the volume of U.S. imports of trade goods in 2009 experience: increase, decrease, or steady?
Reference answer: The volume of U.S. imports of trade goods in 2009 experienced a significant decrease, with a drop between -0.2 and -0.3.
Prediction answer: The volume of U.S. imports of trade goods experienced a decrease
# GPT4 OUTPUT Example:
The prediction answer states that "The volume of U.S. imports of trade goods experienced a decrease". This matches with the reference answer which says "The volume of U.S. imports of trade goods in 2009 experienced a significant decrease". Therefore, the prediction answer is correct.
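A minimal sketch of this evaluation step is shown below, assuming the same hypothetical `call_gpt4` wrapper as in the earlier sketch; the prompt paraphrases Figure 7, and the verdict comes back as free-form text rather than a structured score.

```python
# Minimal sketch of the GPT-4-based correctness check illustrated in Figure 7.
def call_gpt4(prompt: str) -> str:
    # Hypothetical single-turn GPT-4 wrapper, not the authors' released evaluation code.
    raise NotImplementedError("wrap your preferred GPT-4 client here")


def judge_knowledge_prediction(question: str, reference: str, prediction: str) -> str:
    # Build the teacher-style prompt from Figure 7 and let GPT-4 issue the verdict.
    prompt = (
        "Imagine you are an intelligent teacher. Thoroughly read both the reference "
        "answer and the prediction answer to ensure a clear understanding of the "
        "information provided. Assess the accuracy of the prediction; note that it is "
        "considered incorrect if it contradicts the reference answer.\n"
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Prediction answer: {prediction}"
    )
    return call_gpt4(prompt)  # free-form verdict, e.g. "... the prediction answer is correct."
```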
Figure 7: An example prompt for text-only GPT4 we use to evaluate knowledge manipulation instruction. The sentences in BLUE are the questions, reference answers, and predictions of models. | 2306.14565#69 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14898 | 69 | Reward function. Evaluation of an agent's trajectory across a single task episode towards carrying out the given instruction is determined by modifications to the file system and the latest execution output. The instructions found in the InterCode-Bash dataset fall under one of two buckets: it either 1. Requests information about the file system that can be answered via execution output generated from a correct sequence of Bash actions (i.e. "How many files...", "What is the size of...", "Where is the .png image stored?") or 2. Requests a change to the location, configuration, or content of a file or folder (i.e. "Move the dir1 folder from...", "Set the permissions to...", "Append a line to..."). Any relevant correct changes are therefore captured by considering both execution output and file system modifications during evaluation.
We define A and G as the outputs of the agent and gold commands respectively, where A_out and G_out refer to the execution output, and A_fs and G_fs refer to a list of entries reflecting file system modifications, where each entry is [file path, modification type ∈ [added, changed, deleted]]. We then formally define the reward function as follows: | 2306.14898#69 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |
2306.14565 | 70 | Figure 7: An example prompt for text-only GPT4 we use to evaluate knowledge manipulation instruction. The sentences in BLUE are the questions, reference answers, and predictions of models.
Categories      Metric            Ours   MiniGPT4   LLaVA   InstructBLIP   MMGPT
Interrogative   ACCURACY(GPT4)    6.61   4.14       4.60    5.95           1.01
Interrogative   RELEVANCY(GPT4)   8.46   6.20       5.88    7.67           2.00
Declarative     ACCURACY(GPT4)    6.50   3.98       3.82    5.47           0.90
Declarative     RELEVANCY(GPT4)   8.21   5.39       5.84    6.64           1.62

Table 10: Evaluation results on Interrogative Instructions and Declarative Instructions by GAVIE. The metric scores of ACCURACY(GPT4) and RELEVANCY(GPT4) are on a scale of 0 to 10.
# A.4 More Dataset Statistics
We summarize the popular words in the knowledge manipulations generated by GPT4 in Fig. 10 and find they mainly fall into six categories: event, number, date, persons, place, and others. Some examples are shown below. | 2306.14565#70 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14898 | 70 | R = 0.34 \cdot \mathrm{similarity}(A_{out}, G_{out}) + 0.33 \cdot \left(1 - \mathrm{erf}\left(\left|A_{fs} \cup G_{fs} - A_{fs} \cap G_{fs}\right|\right)\right) + 0.33 \cdot \frac{\mathrm{is\_correct}(A_{fs} \cap G_{fs})}{\left|A_{fs} \cap G_{fs}\right|} \qquad (1)
Where similarity refers to lexical similarity, which is determined by the cosine similarity score between TF-IDF vectors (calculated with TfidfVectorizer from scikit-learn) of the two execution outputs. The second component of the reward function reflects the number of file system modifications that were either not completed or not necessary; the error associated with the total number of misses is constrained to the range [0,1] using the Gauss error function (erf), where 0 corresponds to no file system modification mistakes. The third component checks what proportion of paths altered by both agent and gold were modified correctly. The is_correct function returns the number of file paths that were changed correctly, determined by checking whether the md5sum hashes of each file path are identical for agent and gold. If A_fs ∩ G_fs = ∅, this reward is automatically 1. The scalar weights for each component are arbitrarily assigned. | 2306.14898#70 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |
2306.14565 | 71 | Canada, increase, decrease, lowest, 2009, United States, 2016, employment, unemployment, higher, 2013, 2017, 2015, drop, minimum, worst, consistent, kingdom, x-axis, y-axis, under, Italy, pie, bar...
Figure 8: Evaluation results on positive and negative instructions by GAVIE. Bar charts (a) Accuracy(GPT4) and (b) Relevancy(GPT4) compare Ours, MiniGPT4, LLaVA, InstructBLIP, and MMGPT on positive vs. negative instructions.
Figure 9: Evaluation results on different instruction lengths by GAVIE. Bar charts (a) Accuracy(GPT4) and (b) Relevancy(GPT4) compare Ours, MiniGPT4, LLaVA, InstructBLIP, and MMGPT on instructions of length > 12 vs. length < 12.
Existence Count Position Color Posters Celebrity Scene Landmark Artwork OCR 68.33 115.0 120.00 60.50 57.50 77.50 80.00 96.25 65.00 101.25 110.0 | 2306.14565#71 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14898 | 71 | A max score of 1 is achieved only if the correct file paths are changed, the changes are correct, and the latest execution output matches the gold command output exactly. Figure 1 visualizes the reward function. While an exact match comparison would have been a simpler choice to satisfy the Success Rate metric put forth in the main paper, we design this reward function to 1. Demonstrate that InterCode can support complex reward functions that account for multiple forms of execution output, and 2. Provide practitioners who use the InterCode-Bash environment with a scalar reward that reflects how "similar" the given output is to the expected output, rather than a flat 0/1 reward value that may over-penalize and discount the efforts of more capable reasoning abilities. These reasons also motivate the SQL-based environment's reward function, discussed in the following section.
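For concreteness, the reward in Equation (1) can be sketched as follows. This is a minimal illustration under the definitions above: it operates on in-memory file snapshots and uses illustrative helper names and signatures, whereas the actual benchmark inspects a live Docker container.

```python
# Minimal sketch of the InterCode-Bash reward in Equation (1).
import hashlib
from math import erf

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def output_similarity(agent_out: str, gold_out: str) -> float:
    # Lexical similarity: cosine similarity between TF-IDF vectors of the two outputs.
    if not agent_out.strip() and not gold_out.strip():
        return 1.0
    tfidf = TfidfVectorizer().fit_transform([agent_out, gold_out])
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])


def bash_reward(agent_out, gold_out, agent_fs, gold_fs, agent_files, gold_files):
    # agent_fs / gold_fs: sets of modified file paths (A_fs, G_fs).
    # agent_files / gold_files: {path: file bytes}, a simplification used here to
    # compare md5 hashes instead of reading from the container.
    union, inter = agent_fs | gold_fs, agent_fs & gold_fs
    misses = len(union - inter)  # modifications that were missing or unnecessary
    if inter:
        correct = sum(
            hashlib.md5(agent_files[p]).hexdigest() == hashlib.md5(gold_files[p]).hexdigest()
            for p in inter
        ) / len(inter)  # fraction of shared paths whose contents match the gold version
    else:
        correct = 1.0   # A_fs ∩ G_fs is empty -> this component defaults to 1
    return (
        0.34 * output_similarity(agent_out, gold_out)
        + 0.33 * (1 - erf(misses))
        + 0.33 * correct
    )
```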
16 | 2306.14898#71 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |
2306.14565 | 72 | Table 11: Completed experiments of Perception on MME [9] benchmark.
Cognition               Original MiniGPT4   Finetuned MiniGPT4   Original mPLUG-Owl   Finetuned mPLUG-Owl
Commonsense Reasoning   59.29               76.42                78.57                100.71
Numerical Calculation   45.00               55.00                60.00                70.00
Text Translation        0.00                77.50                80.00                85.00
Code Reasoning          40.00               67.50                57.50                72.50
Table 12: Completed experiments of Cognition on MME [9] benchmark.
Figure 10: Distribution of Knowledge Manipulations. The knowledge mainly includes six categories: event, number, date, persons, place, and others.
18 | 2306.14565#72 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | [
{
"id": "2307.05052"
},
{
"id": "2302.13971"
},
{
"id": "2307.05356"
},
{
"id": "2306.14565"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.10355"
},
{
"id": "2212.00280"
},
{
"id": "2305.04790"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "1809.02156"
},
{
"id": "2306.06306"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2303.18223"
},
{
"id": "2010.03743"
},
{
"id": "2303.16634"
},
{
"id": "2212.10560"
},
{
"id": "2302.04023"
},
{
"id": "1908.03557"
},
{
"id": "2305.03726"
},
{
"id": "1907.11692"
},
{
"id": "2103.11943"
},
{
"id": "2303.15056"
},
{
"id": "2305.06500"
}
] |
2306.14898 | 72 | FooBar.html Hello.java Hello1.java NewClass.java dir1 AnotherHello.java info.php subdir1 jsonfile1.json pythonscript4.py shellscript1.sh subsubdir1 pythonscript1.py shellscript4.sh textfile4.txt subdir2 textfile1.txt .DS_Store MANIFEST a.out folder1 a.out data.csv doc1.doc doc2.doc keep.txt log1.log new.sh old2.txt recent.txt script1.sh text2.txt text3.txt text4.txt dir2 folder2 shellscript2.sh subdir1 javafile1.java textfile2.txt subdir2 pythonscript2.py shellscript5.sh subsubdir1 textfile5.txt dir3 subdir1 special text3.txt special_text1.txt special_text2.txt text1.txt folder2.tar.gz folder3 backup_dbg backup sql1.sql text1_dbg.txt pythonscript3.py subsubdir1 FooBar special text4.txt temp file.txt file.txt shellscript3.sh textfile3.txt tmp empty.txt temp1 temp temp_1 text1.txt subdir2 tmp.txt html1.html temp csvfile1.csv | 2306.14898#72 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | [
{
"id": "2304.05128"
},
{
"id": "2207.10397"
}
] |