doi
stringlengths
10
10
chunk-id
int64
0
936
chunk
stringlengths
401
2.02k
id
stringlengths
12
14
title
stringlengths
8
162
summary
stringlengths
228
1.92k
source
stringlengths
31
31
authors
stringlengths
7
6.97k
categories
stringlengths
5
107
comment
stringlengths
4
398
journal_ref
stringlengths
8
194
primary_category
stringlengths
5
17
published
stringlengths
8
8
updated
stringlengths
8
8
references
list
2306.15626
129
: b + c = ¢ + b’ using the “add_comm™ lemma, and then use this equality to rewrite the goal. Let's try this approach. Used LeanDoj & LOCALHOST REQUEST TO LEANDOJO id”: "have h : b +c = c + b := add_comm bc, zw h" RESPONSE FROM LEANDOJO "{'error': ‘gen ca ct : (1, 42)) msg="rewrite tactic failed, did not find instance of the pattern in the target expression\n b + c" tactic_state="a bc: N,\nh: b+c=c+#b\n- at+b+ ateo+b", ‘proof finished': False}" It looks like the *xw* tactic still failed to apply the intermediate equality *h : b +c =c + b* tothe goal “a + b + ¢ = a + ¢ + b’. The error message indicates that the tactic did not find an instance of the pattern *b + ¢” inthe target expression.
2306.15626#129
LeanDojo: Theorem Proving with Retrieval-Augmented Language Models
Large language models (LLMs) have shown promise in proving formal theorems using proof assistants such as Lean. However, existing methods are difficult to reproduce or build on, due to private code, data, and large compute requirements. This has created substantial barriers to research on machine learning methods for theorem proving. This paper removes these barriers by introducing LeanDojo: an open-source Lean playground consisting of toolkits, data, models, and benchmarks. LeanDojo extracts data from Lean and enables interaction with the proof environment programmatically. It contains fine-grained annotations of premises in proofs, providing valuable data for premise selection: a key bottleneck in theorem proving. Using this data, we develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented with retrieval for selecting premises from a vast math library. It is inexpensive and needs only one GPU week of training. Our retriever leverages LeanDojo's program analysis capability to identify accessible premises and hard negative examples, which makes retrieval much more effective. Furthermore, we construct a new benchmark consisting of 98,734 theorems and proofs extracted from Lean's math library. It features challenging data split requiring the prover to generalize to theorems relying on novel premises that are never used in training. We use this benchmark for training and evaluation, and experimental results demonstrate the effectiveness of ReProver over non-retrieval baselines and GPT-4. We thus provide the first set of open-source LLM-based theorem provers without any proprietary datasets and release it under a permissive MIT license to facilitate further research.
http://arxiv.org/pdf/2306.15626
Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar
cs.LG, cs.AI, cs.LO, stat.ML
Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral presentation. Data, code, and models available at https://leandojo.org/
null
cs.LG
20230627
20231027
[ { "id": "2302.13971" }, { "id": "2302.12433" }, { "id": "2302.04761" }, { "id": "2303.12570" }, { "id": "2303.04488" }, { "id": "2205.15231" }, { "id": "1505.04324" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2206.01962" }, { "id": "2107.03374" }, { "id": "2009.03393" }, { "id": "2303.08774" }, { "id": "2301.02195" }, { "id": "2203.13474" }, { "id": "2212.10007" }, { "id": "2305.07766" }, { "id": "2208.03299" }, { "id": "2303.04910" }, { "id": "2305.06161" }, { "id": "2305.11841" }, { "id": "2206.12839" }, { "id": "1606.01540" }, { "id": "2305.16366" }, { "id": "2212.10535" }, { "id": "2303.04864" }, { "id": "1701.06972" }, { "id": "2304.10486" }, { "id": "2305.07185" }, { "id": "1905.10501" } ]
2306.15626
130
Figure H: (ChatGPT-3.5, 4/8) ChatGPT made another two failed attempts. Here, the second attempt had the same problem as in Fig. E (“+” is left associative). In addition, one can also switch from DPR to radically different retrieval architectures. For example, generative retrieval [107–109] is a recent class of models performing retrieval by directly predicting the document IDs, which could be the premise names in our task. Limitations of Imitating Human-Written Proofs. Human-written proofs extracted by LeanDojo provide valuable data for training the prover. However, we have also observed limitations of using them as the sole training target: First, they are relatively scarce for today’s data-hungry LLMs. LeanDojo Benchmark has 98,734 proofs, covering a large portion of available data in Lean (as of October 2023). The number of proofs 34
2306.15626#130
LeanDojo: Theorem Proving with Retrieval-Augmented Language Models
Large language models (LLMs) have shown promise in proving formal theorems using proof assistants such as Lean. However, existing methods are difficult to reproduce or build on, due to private code, data, and large compute requirements. This has created substantial barriers to research on machine learning methods for theorem proving. This paper removes these barriers by introducing LeanDojo: an open-source Lean playground consisting of toolkits, data, models, and benchmarks. LeanDojo extracts data from Lean and enables interaction with the proof environment programmatically. It contains fine-grained annotations of premises in proofs, providing valuable data for premise selection: a key bottleneck in theorem proving. Using this data, we develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented with retrieval for selecting premises from a vast math library. It is inexpensive and needs only one GPU week of training. Our retriever leverages LeanDojo's program analysis capability to identify accessible premises and hard negative examples, which makes retrieval much more effective. Furthermore, we construct a new benchmark consisting of 98,734 theorems and proofs extracted from Lean's math library. It features challenging data split requiring the prover to generalize to theorems relying on novel premises that are never used in training. We use this benchmark for training and evaluation, and experimental results demonstrate the effectiveness of ReProver over non-retrieval baselines and GPT-4. We thus provide the first set of open-source LLM-based theorem provers without any proprietary datasets and release it under a permissive MIT license to facilitate further research.
http://arxiv.org/pdf/2306.15626
Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar
cs.LG, cs.AI, cs.LO, stat.ML
Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral presentation. Data, code, and models available at https://leandojo.org/
null
cs.LG
20230627
20231027
[ { "id": "2302.13971" }, { "id": "2302.12433" }, { "id": "2302.04761" }, { "id": "2303.12570" }, { "id": "2303.04488" }, { "id": "2205.15231" }, { "id": "1505.04324" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2206.01962" }, { "id": "2107.03374" }, { "id": "2009.03393" }, { "id": "2303.08774" }, { "id": "2301.02195" }, { "id": "2203.13474" }, { "id": "2212.10007" }, { "id": "2305.07766" }, { "id": "2208.03299" }, { "id": "2303.04910" }, { "id": "2305.06161" }, { "id": "2305.11841" }, { "id": "2206.12839" }, { "id": "1606.01540" }, { "id": "2305.16366" }, { "id": "2212.10535" }, { "id": "2303.04864" }, { "id": "1701.06972" }, { "id": "2304.10486" }, { "id": "2305.07185" }, { "id": "1905.10501" } ]
2306.15626
131
To address this, we can use the “simp” tactic, which is a more powerful tactic for simplifying expressions. The ~simp~ tactic can automatically apply the commutativity of addition to simplify the goal. Let's try using the *simp* tactic to prove the goal. Used LeanDojo REQUEST TO LEANDOJO { "state_id": 0, "tactic": "simp" 3 RESPONSE FROM LEANDOJO "C' erro’ jen_tac_and_capture_res_failed: pi msg="simplify tactic failed to simplify" tactic_state="a bc : N\nk a+b + atc+b"', ‘proof_finished': False}" It seems that the *simp* tactic also failed to simplify the goal "a + b +c =a +c +b”. The error message indicates that the tactic failed to simplify the expression. Given the limitations of the available tactics, let's try a different approach. We can use the zw’ tactic to apply the commutativity of addition to the subterm “a + b* inthe goal. This will allow us to rewrite the goalas “b + a + c = a
2306.15626#131
LeanDojo: Theorem Proving with Retrieval-Augmented Language Models
Large language models (LLMs) have shown promise in proving formal theorems using proof assistants such as Lean. However, existing methods are difficult to reproduce or build on, due to private code, data, and large compute requirements. This has created substantial barriers to research on machine learning methods for theorem proving. This paper removes these barriers by introducing LeanDojo: an open-source Lean playground consisting of toolkits, data, models, and benchmarks. LeanDojo extracts data from Lean and enables interaction with the proof environment programmatically. It contains fine-grained annotations of premises in proofs, providing valuable data for premise selection: a key bottleneck in theorem proving. Using this data, we develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented with retrieval for selecting premises from a vast math library. It is inexpensive and needs only one GPU week of training. Our retriever leverages LeanDojo's program analysis capability to identify accessible premises and hard negative examples, which makes retrieval much more effective. Furthermore, we construct a new benchmark consisting of 98,734 theorems and proofs extracted from Lean's math library. It features challenging data split requiring the prover to generalize to theorems relying on novel premises that are never used in training. We use this benchmark for training and evaluation, and experimental results demonstrate the effectiveness of ReProver over non-retrieval baselines and GPT-4. We thus provide the first set of open-source LLM-based theorem provers without any proprietary datasets and release it under a permissive MIT license to facilitate further research.
http://arxiv.org/pdf/2306.15626
Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar
cs.LG, cs.AI, cs.LO, stat.ML
Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral presentation. Data, code, and models available at https://leandojo.org/
null
cs.LG
20230627
20231027
[ { "id": "2302.13971" }, { "id": "2302.12433" }, { "id": "2302.04761" }, { "id": "2303.12570" }, { "id": "2303.04488" }, { "id": "2205.15231" }, { "id": "1505.04324" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2206.01962" }, { "id": "2107.03374" }, { "id": "2009.03393" }, { "id": "2303.08774" }, { "id": "2301.02195" }, { "id": "2203.13474" }, { "id": "2212.10007" }, { "id": "2305.07766" }, { "id": "2208.03299" }, { "id": "2303.04910" }, { "id": "2305.06161" }, { "id": "2305.11841" }, { "id": "2206.12839" }, { "id": "1606.01540" }, { "id": "2305.16366" }, { "id": "2212.10535" }, { "id": "2303.04864" }, { "id": "1701.06972" }, { "id": "2304.10486" }, { "id": "2305.07185" }, { "id": "1905.10501" } ]
2306.15626
132
the commutativity of addition to the subterm “a + b* inthe goal. This will allow us to rewrite the goalas “b + a + c = a + c + b’.Let'stry this approach. Used LeanDoj REQUEST TO LEANDOJO "xw add_comm a b" RESPONSE FROM LEANDOJO "{'state_id': 1, 'state': 'abc:N\n- b+a+c=at+c+b', 'proof finished’: False}" After applying the commutativity of addition to the subterm “a + b*, we have successfully rewritten the goal as "b + a + c = a + ¢ + b*. The updated goalis: lean ©) Copy code abc: C€ tbtatec=ateo+b
2306.15626#132
LeanDojo: Theorem Proving with Retrieval-Augmented Language Models
Large language models (LLMs) have shown promise in proving formal theorems using proof assistants such as Lean. However, existing methods are difficult to reproduce or build on, due to private code, data, and large compute requirements. This has created substantial barriers to research on machine learning methods for theorem proving. This paper removes these barriers by introducing LeanDojo: an open-source Lean playground consisting of toolkits, data, models, and benchmarks. LeanDojo extracts data from Lean and enables interaction with the proof environment programmatically. It contains fine-grained annotations of premises in proofs, providing valuable data for premise selection: a key bottleneck in theorem proving. Using this data, we develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented with retrieval for selecting premises from a vast math library. It is inexpensive and needs only one GPU week of training. Our retriever leverages LeanDojo's program analysis capability to identify accessible premises and hard negative examples, which makes retrieval much more effective. Furthermore, we construct a new benchmark consisting of 98,734 theorems and proofs extracted from Lean's math library. It features challenging data split requiring the prover to generalize to theorems relying on novel premises that are never used in training. We use this benchmark for training and evaluation, and experimental results demonstrate the effectiveness of ReProver over non-retrieval baselines and GPT-4. We thus provide the first set of open-source LLM-based theorem provers without any proprietary datasets and release it under a permissive MIT license to facilitate further research.
http://arxiv.org/pdf/2306.15626
Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar
cs.LG, cs.AI, cs.LO, stat.ML
Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral presentation. Data, code, and models available at https://leandojo.org/
null
cs.LG
20230627
20231027
[ { "id": "2302.13971" }, { "id": "2302.12433" }, { "id": "2302.04761" }, { "id": "2303.12570" }, { "id": "2303.04488" }, { "id": "2205.15231" }, { "id": "1505.04324" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2206.01962" }, { "id": "2107.03374" }, { "id": "2009.03393" }, { "id": "2303.08774" }, { "id": "2301.02195" }, { "id": "2203.13474" }, { "id": "2212.10007" }, { "id": "2305.07766" }, { "id": "2208.03299" }, { "id": "2303.04910" }, { "id": "2305.06161" }, { "id": "2305.11841" }, { "id": "2206.12839" }, { "id": "1606.01540" }, { "id": "2305.16366" }, { "id": "2212.10535" }, { "id": "2303.04864" }, { "id": "1701.06972" }, { "id": "2304.10486" }, { "id": "2305.07185" }, { "id": "1905.10501" } ]
2306.15626
133
Figure I: (ChatGPT-3.5, 5/8) ChatGPT tried to simplify the goal using the simp tactic but failed. Then it tried the “rw add_comm a b” tactic, which successfully rewrote the goal from “a + b + c = a + c + b” to “b + a + c = a + c + b”. in other proof assistants has the same order of magnitude (tens or hundreds of thousands). Due to limited data, we cannot constantly improve the performance simply by scaling up the model size. Second, theorem proving in proof assistants is an interactive process, but the proof only captures the final successful trajectory. Without the intermediate history of trial and error, it can be quite opaque how final proofs are derived. Therefore, tactics in human-written proofs can be difficult for the model to learn from. Third, models trained on proofs in one project often struggle to generalize to theorems in new domains [9, 20], e.g., from mathlib to MiniF2F and ProofNet (Appendix C.4). 35
2306.15626#133
LeanDojo: Theorem Proving with Retrieval-Augmented Language Models
Large language models (LLMs) have shown promise in proving formal theorems using proof assistants such as Lean. However, existing methods are difficult to reproduce or build on, due to private code, data, and large compute requirements. This has created substantial barriers to research on machine learning methods for theorem proving. This paper removes these barriers by introducing LeanDojo: an open-source Lean playground consisting of toolkits, data, models, and benchmarks. LeanDojo extracts data from Lean and enables interaction with the proof environment programmatically. It contains fine-grained annotations of premises in proofs, providing valuable data for premise selection: a key bottleneck in theorem proving. Using this data, we develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented with retrieval for selecting premises from a vast math library. It is inexpensive and needs only one GPU week of training. Our retriever leverages LeanDojo's program analysis capability to identify accessible premises and hard negative examples, which makes retrieval much more effective. Furthermore, we construct a new benchmark consisting of 98,734 theorems and proofs extracted from Lean's math library. It features challenging data split requiring the prover to generalize to theorems relying on novel premises that are never used in training. We use this benchmark for training and evaluation, and experimental results demonstrate the effectiveness of ReProver over non-retrieval baselines and GPT-4. We thus provide the first set of open-source LLM-based theorem provers without any proprietary datasets and release it under a permissive MIT license to facilitate further research.
http://arxiv.org/pdf/2306.15626
Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar
cs.LG, cs.AI, cs.LO, stat.ML
Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral presentation. Data, code, and models available at https://leandojo.org/
null
cs.LG
20230627
20231027
[ { "id": "2302.13971" }, { "id": "2302.12433" }, { "id": "2302.04761" }, { "id": "2303.12570" }, { "id": "2303.04488" }, { "id": "2205.15231" }, { "id": "1505.04324" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2206.01962" }, { "id": "2107.03374" }, { "id": "2009.03393" }, { "id": "2303.08774" }, { "id": "2301.02195" }, { "id": "2203.13474" }, { "id": "2212.10007" }, { "id": "2305.07766" }, { "id": "2208.03299" }, { "id": "2303.04910" }, { "id": "2305.06161" }, { "id": "2305.11841" }, { "id": "2206.12839" }, { "id": "1606.01540" }, { "id": "2305.16366" }, { "id": "2212.10535" }, { "id": "2303.04864" }, { "id": "1701.06972" }, { "id": "2304.10486" }, { "id": "2305.07185" }, { "id": "1905.10501" } ]
2306.15626
134
35 Now, we can use the *rw* tactic again to apply the commutativity of addition to the subterm “a + c° inthe goal. This will allow us to rewrite the goalas *b + a + ¢ = c + a + b’. Let's try this approach. Used LeanDojo 4¥ LOCALHOST REQUEST TO LEANDOJO- RESPONSE FROM LEANDOJO :2, ‘state’: 'abc: N\n) b+at+c=c+a+b!, 'proof_finished': After applying the commutativity of addition to the subterm ~a + c*, we have successfully rewritten the goal as ~b + a + c = ¢ + a + b~. The updated goal is: lean ©) Copy code abc: C€ Fbtatc=ctatb Figure J: (ChatGPT-3.5, 6/8) Then it successfully performed another rewrite from “b + a + c = a + c + b” to “b + a + c = c + a + b”.
2306.15626#134
LeanDojo: Theorem Proving with Retrieval-Augmented Language Models
Large language models (LLMs) have shown promise in proving formal theorems using proof assistants such as Lean. However, existing methods are difficult to reproduce or build on, due to private code, data, and large compute requirements. This has created substantial barriers to research on machine learning methods for theorem proving. This paper removes these barriers by introducing LeanDojo: an open-source Lean playground consisting of toolkits, data, models, and benchmarks. LeanDojo extracts data from Lean and enables interaction with the proof environment programmatically. It contains fine-grained annotations of premises in proofs, providing valuable data for premise selection: a key bottleneck in theorem proving. Using this data, we develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented with retrieval for selecting premises from a vast math library. It is inexpensive and needs only one GPU week of training. Our retriever leverages LeanDojo's program analysis capability to identify accessible premises and hard negative examples, which makes retrieval much more effective. Furthermore, we construct a new benchmark consisting of 98,734 theorems and proofs extracted from Lean's math library. It features challenging data split requiring the prover to generalize to theorems relying on novel premises that are never used in training. We use this benchmark for training and evaluation, and experimental results demonstrate the effectiveness of ReProver over non-retrieval baselines and GPT-4. We thus provide the first set of open-source LLM-based theorem provers without any proprietary datasets and release it under a permissive MIT license to facilitate further research.
http://arxiv.org/pdf/2306.15626
Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar
cs.LG, cs.AI, cs.LO, stat.ML
Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral presentation. Data, code, and models available at https://leandojo.org/
null
cs.LG
20230627
20231027
[ { "id": "2302.13971" }, { "id": "2302.12433" }, { "id": "2302.04761" }, { "id": "2303.12570" }, { "id": "2303.04488" }, { "id": "2205.15231" }, { "id": "1505.04324" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2206.01962" }, { "id": "2107.03374" }, { "id": "2009.03393" }, { "id": "2303.08774" }, { "id": "2301.02195" }, { "id": "2203.13474" }, { "id": "2212.10007" }, { "id": "2305.07766" }, { "id": "2208.03299" }, { "id": "2303.04910" }, { "id": "2305.06161" }, { "id": "2305.11841" }, { "id": "2206.12839" }, { "id": "1606.01540" }, { "id": "2305.16366" }, { "id": "2212.10535" }, { "id": "2303.04864" }, { "id": "1701.06972" }, { "id": "2304.10486" }, { "id": "2305.07185" }, { "id": "1905.10501" } ]
2306.15626
135
To overcome these limitations, existing work has explored learning from auxiliary data or data collected via online interaction with the proof assistant. For example, Proof Artifact Co-Training (PACT) co-trains the tactic generator on nine auxiliary tasks, such as predicting types and theorem names [16]. MetaGen [38] trains a neural network to generate synthetic theorems/proofs as training data in the Metamath proof assistant [110]. Polu et al. [19] and Lample et al. [17] improve the prover by training it on successful proofs found by itself. Incorporating these techniques into our system may lead to substantial improvements. 36
2306.15626#135
LeanDojo: Theorem Proving with Retrieval-Augmented Language Models
Large language models (LLMs) have shown promise in proving formal theorems using proof assistants such as Lean. However, existing methods are difficult to reproduce or build on, due to private code, data, and large compute requirements. This has created substantial barriers to research on machine learning methods for theorem proving. This paper removes these barriers by introducing LeanDojo: an open-source Lean playground consisting of toolkits, data, models, and benchmarks. LeanDojo extracts data from Lean and enables interaction with the proof environment programmatically. It contains fine-grained annotations of premises in proofs, providing valuable data for premise selection: a key bottleneck in theorem proving. Using this data, we develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented with retrieval for selecting premises from a vast math library. It is inexpensive and needs only one GPU week of training. Our retriever leverages LeanDojo's program analysis capability to identify accessible premises and hard negative examples, which makes retrieval much more effective. Furthermore, we construct a new benchmark consisting of 98,734 theorems and proofs extracted from Lean's math library. It features challenging data split requiring the prover to generalize to theorems relying on novel premises that are never used in training. We use this benchmark for training and evaluation, and experimental results demonstrate the effectiveness of ReProver over non-retrieval baselines and GPT-4. We thus provide the first set of open-source LLM-based theorem provers without any proprietary datasets and release it under a permissive MIT license to facilitate further research.
http://arxiv.org/pdf/2306.15626
Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar
cs.LG, cs.AI, cs.LO, stat.ML
Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral presentation. Data, code, and models available at https://leandojo.org/
null
cs.LG
20230627
20231027
[ { "id": "2302.13971" }, { "id": "2302.12433" }, { "id": "2302.04761" }, { "id": "2303.12570" }, { "id": "2303.04488" }, { "id": "2205.15231" }, { "id": "1505.04324" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2206.01962" }, { "id": "2107.03374" }, { "id": "2009.03393" }, { "id": "2303.08774" }, { "id": "2301.02195" }, { "id": "2203.13474" }, { "id": "2212.10007" }, { "id": "2305.07766" }, { "id": "2208.03299" }, { "id": "2303.04910" }, { "id": "2305.06161" }, { "id": "2305.11841" }, { "id": "2206.12839" }, { "id": "1606.01540" }, { "id": "2305.16366" }, { "id": "2212.10535" }, { "id": "2303.04864" }, { "id": "1701.06972" }, { "id": "2304.10486" }, { "id": "2305.07185" }, { "id": "1905.10501" } ]
2306.15626
136
36 Now, we can use the “xw~ tactic one more time to apply the commutativity of addition to the subterm “b + a” inthe goal. This will allow us to rewrite the goalas"a+b+c=ct+at b*, which matches the right-hand side of the original goal. Let's try this approach. & LOCALHOST REQUEST TO LEANDOJO- "state_id": 2, "tactic": "xw add_comm b a" By RESPONSE FROM LEANDOJO "{'state_id': 3, 'state': ‘abc: N\nk a+b+c=c+a+b', 'proof_finished': False}" After applying the commutativity of addition to the subterm “b + a, we have successfully rewritten the goalas “a + b + c = ¢ + a + b*. The updated goal is: ©) Copy code Figure K: (ChatGPT-3.5, 7/8) Another rewrite tactic, transforming the goal to “a + b + c = c + a + b”.
2306.15626#136
LeanDojo: Theorem Proving with Retrieval-Augmented Language Models
Large language models (LLMs) have shown promise in proving formal theorems using proof assistants such as Lean. However, existing methods are difficult to reproduce or build on, due to private code, data, and large compute requirements. This has created substantial barriers to research on machine learning methods for theorem proving. This paper removes these barriers by introducing LeanDojo: an open-source Lean playground consisting of toolkits, data, models, and benchmarks. LeanDojo extracts data from Lean and enables interaction with the proof environment programmatically. It contains fine-grained annotations of premises in proofs, providing valuable data for premise selection: a key bottleneck in theorem proving. Using this data, we develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented with retrieval for selecting premises from a vast math library. It is inexpensive and needs only one GPU week of training. Our retriever leverages LeanDojo's program analysis capability to identify accessible premises and hard negative examples, which makes retrieval much more effective. Furthermore, we construct a new benchmark consisting of 98,734 theorems and proofs extracted from Lean's math library. It features challenging data split requiring the prover to generalize to theorems relying on novel premises that are never used in training. We use this benchmark for training and evaluation, and experimental results demonstrate the effectiveness of ReProver over non-retrieval baselines and GPT-4. We thus provide the first set of open-source LLM-based theorem provers without any proprietary datasets and release it under a permissive MIT license to facilitate further research.
http://arxiv.org/pdf/2306.15626
Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar
cs.LG, cs.AI, cs.LO, stat.ML
Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral presentation. Data, code, and models available at https://leandojo.org/
null
cs.LG
20230627
20231027
[ { "id": "2302.13971" }, { "id": "2302.12433" }, { "id": "2302.04761" }, { "id": "2303.12570" }, { "id": "2303.04488" }, { "id": "2205.15231" }, { "id": "1505.04324" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2206.01962" }, { "id": "2107.03374" }, { "id": "2009.03393" }, { "id": "2303.08774" }, { "id": "2301.02195" }, { "id": "2203.13474" }, { "id": "2212.10007" }, { "id": "2305.07766" }, { "id": "2208.03299" }, { "id": "2303.04910" }, { "id": "2305.06161" }, { "id": "2305.11841" }, { "id": "2206.12839" }, { "id": "1606.01540" }, { "id": "2305.16366" }, { "id": "2212.10535" }, { "id": "2303.04864" }, { "id": "1701.06972" }, { "id": "2304.10486" }, { "id": "2305.07185" }, { "id": "1905.10501" } ]
2306.15626
137
Finally, we can use the *rw” tactic one last time to apply the commutativity of addition to the subterm “c + a” inthe goal. This will allow us to rewrite the goalas°a+b+c=a+ect b*, which matches the original goal and completes the proof. Let's finish the proof. UsedLeanDojo 4 LOCALHOST REQUEST TO LEANDOJO- { "state_id "tactic": "xw add_comm c a’ d RESPONSE FROM LEANDOJO "{'state_id': 4, 'state': ‘abc: N\nk a+b+c=a+c+b', 'proof_finished': False} It looks like the goal “a + b + ¢ = a + c + b* remains unchanged after applying the commutativity of addition to the subterm “e + a°. This is because the goal is already in the desired form, and the proof is complete. Since the goal matches the original statement of the theorem, we have successfully proved the theorem “hello_world*. The proof demonstrates that for three arbitrary complex numbers *a*, *b",and *c*,thesum “a + b + c” isequaltothesum “a + c + b*™, which confirms the commutativity of addition for complex numbers.
2306.15626#137
LeanDojo: Theorem Proving with Retrieval-Augmented Language Models
Large language models (LLMs) have shown promise in proving formal theorems using proof assistants such as Lean. However, existing methods are difficult to reproduce or build on, due to private code, data, and large compute requirements. This has created substantial barriers to research on machine learning methods for theorem proving. This paper removes these barriers by introducing LeanDojo: an open-source Lean playground consisting of toolkits, data, models, and benchmarks. LeanDojo extracts data from Lean and enables interaction with the proof environment programmatically. It contains fine-grained annotations of premises in proofs, providing valuable data for premise selection: a key bottleneck in theorem proving. Using this data, we develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented with retrieval for selecting premises from a vast math library. It is inexpensive and needs only one GPU week of training. Our retriever leverages LeanDojo's program analysis capability to identify accessible premises and hard negative examples, which makes retrieval much more effective. Furthermore, we construct a new benchmark consisting of 98,734 theorems and proofs extracted from Lean's math library. It features challenging data split requiring the prover to generalize to theorems relying on novel premises that are never used in training. We use this benchmark for training and evaluation, and experimental results demonstrate the effectiveness of ReProver over non-retrieval baselines and GPT-4. We thus provide the first set of open-source LLM-based theorem provers without any proprietary datasets and release it under a permissive MIT license to facilitate further research.
http://arxiv.org/pdf/2306.15626
Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar
cs.LG, cs.AI, cs.LO, stat.ML
Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral presentation. Data, code, and models available at https://leandojo.org/
null
cs.LG
20230627
20231027
[ { "id": "2302.13971" }, { "id": "2302.12433" }, { "id": "2302.04761" }, { "id": "2303.12570" }, { "id": "2303.04488" }, { "id": "2205.15231" }, { "id": "1505.04324" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2206.01962" }, { "id": "2107.03374" }, { "id": "2009.03393" }, { "id": "2303.08774" }, { "id": "2301.02195" }, { "id": "2203.13474" }, { "id": "2212.10007" }, { "id": "2305.07766" }, { "id": "2208.03299" }, { "id": "2303.04910" }, { "id": "2305.06161" }, { "id": "2305.11841" }, { "id": "2206.12839" }, { "id": "1606.01540" }, { "id": "2305.16366" }, { "id": "2212.10535" }, { "id": "2303.04864" }, { "id": "1701.06972" }, { "id": "2304.10486" }, { "id": "2305.07185" }, { "id": "1905.10501" } ]
2306.15626
139
| want you to prove a theorem in Lean. The theorem's name is “hello_world *, and it is defined in the file ‘src/example.lean * in *https://github.com/yangky11/lean-example *. Used LeanDojo 4% LOCALH REQUEST TO LEANDOJO. m_name": "hel. file_path": " le. lean" FROM LEAN The theorem *hello_world’ is stated as follows: For any three integers *a*, “b*,and *c*,the sum “a + b + c* isequaltothesum ‘a + c + b’. In other words, it asserts the commutativity of addition for integers, which is a we wn property of integers. Let's start the proof. The first step is to apply the “zw” (rewrite) tactic with the “add_assoc” lemma, which states that “(a + b) + ¢ = a + (b + c)°. Thiswill allow us to rearrange the terms in the sum. Used LeanDojo J LOCA\ REQUEST TO LEANDOJO. "xw add ONSE FROM LEANDOJO "C'state_id': 1, ‘state’: 'a
2306.15626#139
LeanDojo: Theorem Proving with Retrieval-Augmented Language Models
Large language models (LLMs) have shown promise in proving formal theorems using proof assistants such as Lean. However, existing methods are difficult to reproduce or build on, due to private code, data, and large compute requirements. This has created substantial barriers to research on machine learning methods for theorem proving. This paper removes these barriers by introducing LeanDojo: an open-source Lean playground consisting of toolkits, data, models, and benchmarks. LeanDojo extracts data from Lean and enables interaction with the proof environment programmatically. It contains fine-grained annotations of premises in proofs, providing valuable data for premise selection: a key bottleneck in theorem proving. Using this data, we develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented with retrieval for selecting premises from a vast math library. It is inexpensive and needs only one GPU week of training. Our retriever leverages LeanDojo's program analysis capability to identify accessible premises and hard negative examples, which makes retrieval much more effective. Furthermore, we construct a new benchmark consisting of 98,734 theorems and proofs extracted from Lean's math library. It features challenging data split requiring the prover to generalize to theorems relying on novel premises that are never used in training. We use this benchmark for training and evaluation, and experimental results demonstrate the effectiveness of ReProver over non-retrieval baselines and GPT-4. We thus provide the first set of open-source LLM-based theorem provers without any proprietary datasets and release it under a permissive MIT license to facilitate further research.
http://arxiv.org/pdf/2306.15626
Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar
cs.LG, cs.AI, cs.LO, stat.ML
Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral presentation. Data, code, and models available at https://leandojo.org/
null
cs.LG
20230627
20231027
[ { "id": "2302.13971" }, { "id": "2302.12433" }, { "id": "2302.04761" }, { "id": "2303.12570" }, { "id": "2303.04488" }, { "id": "2205.15231" }, { "id": "1505.04324" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2206.01962" }, { "id": "2107.03374" }, { "id": "2009.03393" }, { "id": "2303.08774" }, { "id": "2301.02195" }, { "id": "2203.13474" }, { "id": "2212.10007" }, { "id": "2305.07766" }, { "id": "2208.03299" }, { "id": "2303.04910" }, { "id": "2305.06161" }, { "id": "2305.11841" }, { "id": "2206.12839" }, { "id": "1606.01540" }, { "id": "2305.16366" }, { "id": "2212.10535" }, { "id": "2303.04864" }, { "id": "1701.06972" }, { "id": "2304.10486" }, { "id": "2305.07185" }, { "id": "1905.10501" } ]
2306.15626
141
Figure M: (ChatGPT-4, 1/3) The same theorem using the GPT-4 version of ChatGPT. Despite a minor error when interpreting the theorem (natural numbers, not integers), it generated a sensible tactic, rewriting the goal to “a + (b + c) = a + c + b”. 38 The next step is to apply the *xw’ (rewrite) tactic with the *add_comm* lemma, which states that *b + ¢ = ¢ + b’. This will allow us to swap the terms *b* and “c* inthe sum. Used LeanDojo 4% LOCALHOST REQUEST TO LEANDOJO proof_fi After applying the *add_comm* lemma, we have the new goal. For any three integers *a’, “b*,and “c*,thesum ‘a + (c + b)* is equal tothe +b. Figure N: (ChatGPT-4, 2/3) A good move, rewriting the goal to “a + (c + b) = a + c + b”. 39
2306.15626#141
LeanDojo: Theorem Proving with Retrieval-Augmented Language Models
Large language models (LLMs) have shown promise in proving formal theorems using proof assistants such as Lean. However, existing methods are difficult to reproduce or build on, due to private code, data, and large compute requirements. This has created substantial barriers to research on machine learning methods for theorem proving. This paper removes these barriers by introducing LeanDojo: an open-source Lean playground consisting of toolkits, data, models, and benchmarks. LeanDojo extracts data from Lean and enables interaction with the proof environment programmatically. It contains fine-grained annotations of premises in proofs, providing valuable data for premise selection: a key bottleneck in theorem proving. Using this data, we develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented with retrieval for selecting premises from a vast math library. It is inexpensive and needs only one GPU week of training. Our retriever leverages LeanDojo's program analysis capability to identify accessible premises and hard negative examples, which makes retrieval much more effective. Furthermore, we construct a new benchmark consisting of 98,734 theorems and proofs extracted from Lean's math library. It features challenging data split requiring the prover to generalize to theorems relying on novel premises that are never used in training. We use this benchmark for training and evaluation, and experimental results demonstrate the effectiveness of ReProver over non-retrieval baselines and GPT-4. We thus provide the first set of open-source LLM-based theorem provers without any proprietary datasets and release it under a permissive MIT license to facilitate further research.
http://arxiv.org/pdf/2306.15626
Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar
cs.LG, cs.AI, cs.LO, stat.ML
Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral presentation. Data, code, and models available at https://leandojo.org/
null
cs.LG
20230627
20231027
[ { "id": "2302.13971" }, { "id": "2302.12433" }, { "id": "2302.04761" }, { "id": "2303.12570" }, { "id": "2303.04488" }, { "id": "2205.15231" }, { "id": "1505.04324" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2206.01962" }, { "id": "2107.03374" }, { "id": "2009.03393" }, { "id": "2303.08774" }, { "id": "2301.02195" }, { "id": "2203.13474" }, { "id": "2212.10007" }, { "id": "2305.07766" }, { "id": "2208.03299" }, { "id": "2303.04910" }, { "id": "2305.06161" }, { "id": "2305.11841" }, { "id": "2206.12839" }, { "id": "1606.01540" }, { "id": "2305.16366" }, { "id": "2212.10535" }, { "id": "2303.04864" }, { "id": "1701.06972" }, { "id": "2304.10486" }, { "id": "2305.07185" }, { "id": "1905.10501" } ]
2306.15626
142
Now, we can see that the left-hand side and the right-hand side of the equation are identical. Therefore, we can finish the proof by applying the *ref1° tactic, which asserts that any mathematical object is equal to itself. UsedLeanDojo JY LOCALHOST QUEST TO LEANDOJO It seems there was an error when trying to apply the *ref1* tactic. This error occurred because the Lean prover expected the left-hand side and the right-hand side of the equation to be syntactically identical, but they are not due to the parentheses. To resolve this, we can apply the *add_assoc” lemma again to remove the parentheses on the left-hand side. This will make the left-hand side and the right-hand side syntactically identical, allowing us to apply the *ref1* tactic. UsedLeanDojo 4% LOCALHOST REQUEST TO LEANDOJO. RESPONSE FROM LEANDOJO ('proof_finished': Tx After applying the ~add_assoc® lemma again to remove the parentheses, we have: For any three integers *a*, *b*,and “e*,the sum ‘a + ¢ + b* isequaltothesum ‘a + ¢ +
2306.15626#142
LeanDojo: Theorem Proving with Retrieval-Augmented Language Models
Large language models (LLMs) have shown promise in proving formal theorems using proof assistants such as Lean. However, existing methods are difficult to reproduce or build on, due to private code, data, and large compute requirements. This has created substantial barriers to research on machine learning methods for theorem proving. This paper removes these barriers by introducing LeanDojo: an open-source Lean playground consisting of toolkits, data, models, and benchmarks. LeanDojo extracts data from Lean and enables interaction with the proof environment programmatically. It contains fine-grained annotations of premises in proofs, providing valuable data for premise selection: a key bottleneck in theorem proving. Using this data, we develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented with retrieval for selecting premises from a vast math library. It is inexpensive and needs only one GPU week of training. Our retriever leverages LeanDojo's program analysis capability to identify accessible premises and hard negative examples, which makes retrieval much more effective. Furthermore, we construct a new benchmark consisting of 98,734 theorems and proofs extracted from Lean's math library. It features challenging data split requiring the prover to generalize to theorems relying on novel premises that are never used in training. We use this benchmark for training and evaluation, and experimental results demonstrate the effectiveness of ReProver over non-retrieval baselines and GPT-4. We thus provide the first set of open-source LLM-based theorem provers without any proprietary datasets and release it under a permissive MIT license to facilitate further research.
http://arxiv.org/pdf/2306.15626
Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar
cs.LG, cs.AI, cs.LO, stat.ML
Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral presentation. Data, code, and models available at https://leandojo.org/
null
cs.LG
20230627
20231027
[ { "id": "2302.13971" }, { "id": "2302.12433" }, { "id": "2302.04761" }, { "id": "2303.12570" }, { "id": "2303.04488" }, { "id": "2205.15231" }, { "id": "1505.04324" }, { "id": "2305.10601" }, { "id": "2303.17568" }, { "id": "2206.01962" }, { "id": "2107.03374" }, { "id": "2009.03393" }, { "id": "2303.08774" }, { "id": "2301.02195" }, { "id": "2203.13474" }, { "id": "2212.10007" }, { "id": "2305.07766" }, { "id": "2208.03299" }, { "id": "2303.04910" }, { "id": "2305.06161" }, { "id": "2305.11841" }, { "id": "2206.12839" }, { "id": "1606.01540" }, { "id": "2305.16366" }, { "id": "2212.10535" }, { "id": "2303.04864" }, { "id": "1701.06972" }, { "id": "2304.10486" }, { "id": "2305.07185" }, { "id": "1905.10501" } ]
2306.14565
0
3 2 0 2 p e S 9 2 ] V C . s c [ 3 v 5 6 5 4 1 . 6 0 3 2 : v i X r a # Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning Fuxiao Liu1, Kevin Lin2, Linjie Li2, Jianfeng Wang2, Yaser Yacoob1, Lijuan Wang2 1University of Maryland, College Park 2Microsoft Corporation {fl3es, yaser}@umd.edu, {keli, lindsey.li, jianfw, lijuanw}@microsoft.com https://fuxiaoliu.github.io/LRV/ # Abstract
2306.14565#0
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14565
1
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open- ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instruc- tions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate
2306.14565#1
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14824
1
We introduce KOSMOS-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., “[text span](bounding boxes)”, where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GRIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), KOSMOS-2 integrates the grounding capability into downstream applications. We evaluate KOSMOS-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big
2306.14824#1
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14898
1
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash [32], Spider [55], and MBPP [4] datasets. We demonstrate InterCode’s viability as a testbed by evaluating multiple state-of-the-art
2306.14898#1
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
2
experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
2306.14565#2
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14898
2
Spider [55], and MBPP [4] datasets. We demonstrate InterCode’s viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct [51] and Plan & Solve [43]. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. *
2306.14898#2
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
3
# Introduction Significant progress has been made in the field of natural language processing, leading to the development of models that can comprehend and follow instructions given natural language inputs [40; 11; 30; 5]. These models harness the power of large language models (LLM) and rely on high-quality instruction data. Similarly, efforts have been made to introduce similar capabilities to multi-modal models. GPT4 [29] has demonstrated impressive performance in multi-modal conversations with humans, yet the techniques contributing to its extraordinary capabilities remain opaque. As a result, several large multi-modal models (LMMs) have recently emerged [44; 26; 11; 8], such as MiniGPT4 [44] and LLaVA [26], both utilize the Vicuna [7] as the language generator but with different vision encoders [31; 17]. InstructBLIP [8] is initialized from a pre-trained BLIP-2 [16] while Multimodal-GPT (MMGPT) [11] is built on Flamingo [1; 3]. A recent study [15] revealed that the hallucination issue of LLM, although not desired, is inherited by these LMMs [44; 26; 11; 8]. Hallucination, a major ethical concern associated with LLMs [4], can Preprint. Under review.
2306.14565#3
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14824
3
“ Grounding [a campfire](<loc,> <loc4997>) Kosmos-2: Multimodal Large Language Model [It](<loc,,> <locgg3>) sits next to Referring Figure 1: KOSMOS-2 is a multimodal large language model that has new capabilities of multimodal grounding and referring. KOSMOS-2 can understand multimodal input, follow instructions, perceive object descriptions (e.g., bounding boxes), and ground language to the visual world. # ∗ Equal contribution. † Corresponding author.
2306.14824#3
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14898
3
# Introduction The art of computer programming is naturally an interactive process. When a human programmer writes code, she relies on several iterations of a ‘write-execute-test’ loop in order to iteratively refine solutions, plan changes, test sub-modules, and solve ambiguities by checking execution behavior. While this is reminiscent of other human endeavors like writing, code compilation and execution produce exact results that provide a deterministic form of feedback to make the refinement process more straightforward. Depending on the observed results, programmers perform various levels of debugging and rewriting, and continue the process until their code satisfies the requirements. There has been increasing interest in recent years around the development of models that can automatically generate code given a specification in natural language [18, 46, 14, 29, 25]. Powered by large-scale pre-training over thousands of codebases [2, 22, 19], these models have shown solid performance on static benchmarks like HumanEval [9], APPS [20], MBPP [4], CodeXGLUE [33]. However, generating code in a static, sequence-to-sequence or auto-regressive fashion has several drawbacks: 1) simple errors (even typos) can propagate and there is no chance for recovery or Code and data available at https://intercode-benchmark.github.io/ 37th Conference on Neural Information Processing Systems (NeurIPS 2023) Track on Datasets and Benchmarks.
2306.14898#3
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
4
GAVIE Relevancy Score(0-10) i] Accuracy Score(0-10) 1 Score (mPLUG-Owl): 10 Score (mPLUG- Owl). Reason: It followed the instruction and |Reason: directly described a dog playing with a ball on the shore, which was the exact scenario asked for in the instruction. Score (MiniGPT4): 1 Score (MiniGPT4): 10 Reason: It provided an overview of the |Reason: It provides a detailed image but didn't directly address the _| description of the image, such as the instruction about finding adog anda _ people surfing, the ocean, and the wet ball. suits. # § S| Can you help me find a dog playing with a rs ® ball on the shore? Ss Score (Ours): 10 Score (Ours): 10 Reason: The answer is in line with the Reason: The answer directly responds to|image information, pointing out the lack’ the question. of mentioned elements from the ‘a playful element to the otherwise serene question in the image descriptions. ‘beach setting, where people are enjoying a in I e There is no mention of a dog or a balll in —. — 8 the given image. Introducing an element = not in the image may
2306.14565#4
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14824
4
# ∗ Equal contribution. † Corresponding author. Input Prompt Question: How many cows are Question: What does the the left eye of emoji here? Answer: sign say? Answer: The sign says "Welcome to two cows present in the image. Carnaby Street." < & g a € 8 Ps a f= & a 2 = a & oe we un . Question: What is the biggest Question: Why is this animal Question: What is it? Answer: difference betwen bottle-1 and . . bottle-2? Answer: c This animal is unique because it is a turtle. Turtles are known for their slow b hed . . 2 and steady pace, whereas rabbits are a oh, ated ed to as biggest ference between — known to be agile and quick. It is not the boat. ottle-1 and bottle-2 is the label. ° common for a turtle to be faster than a rabbit, especially in terms of speed. (4) (5) (6) A snowman sits next to a campfire in . the snow. He is wearing a hat, scarf, = & andmittens. There are several pots 2 & nearby, likely containing a hot meal S £ — for the snowman. The scene is 2 O serene and peaceful, with the snowman looking at the fire with a warm smile. Describe this image in detail: (7)
2306.14824#4
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14898
4
37th Conference on Neural Information Processing Systems (NeurIPS 2023) Track on Datasets and Benchmarks. ~» Query (Bash) “Move all images from my Desktop and Downloads to a compressed folder on > my USB drive.” Question | Find all text files in the testbed directory and write ODD | their names to a single file called concated.txt. Agent wd | & P Bash Env / Agent Ils | & Bash Env testbed dirl dir2 ... P Agent real] find /testbed -type f -name ‘*.txt’... Bash Env /testbed/dir3/textfiles.txt /testbed... IP Agent / Gold Command real] submit | e find /testbed -type f -name '*.txt! - exec cat {} ; > /testbed/concated.txt © InterCode Env Agent Command(s) 1. cd testbed 2. find . -type f -name; 3. touch concated.txt 4. echo hi.txt >! concated.txt (additi ) : Z Answer (® Env. State file1.txt concated.txt file2.txt Y 11. Compare File System: . Check if common edits are identical Agent Latest x Standard Output are ‘Calculate Lexical! Task Gold Standard Output A concated.txt —7
2306.14898#4
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14824
5
Figure 2: Selected examples generated from KOSMOS-2. The examples include (1) visual grounding, (2)-(3) grounded question answering, (4)-(6) multimodal referring via bounding boxes, and (7) grounded image captioning. 2 # 1 Introduction Multimodal Large Language Models (MLLMs) [HSD+22, ADL+22, HDW+23, DXS+23, Ope23] have successfully played a role as a general-purpose interface across a wide range of tasks, such as language, vision, and vision-language tasks. MLLMs can perceive general modalities, including texts, images, and audio, and generate responses using free-form texts under zero-shot and few-shot settings.
2306.14824#5
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14898
5
Figure 1: Overview of InterCode. Setting up an interactive code environment with InterCode requires a Dockerfile, dataset, reward function definition, and a small amount of subclass implementation. The interactive loop between agent and environment closely mirrors real world software development processes. While InterCode task performance is generally quantified as a binary 0/1 completion score, InterCode allows for the design of more complex evaluation criteria that can incorporate execution output and the effects of interaction on the state space. revision, 2) there is a disconnect between the code generation process and its downstream execution on the desired software and hardware environment, and 3) there is little room for human intervention or collaboration in the code generation process. Recently, some works have proposed the use of execution feedback or interaction [47] to benefit code generation models [24, 21, 48, 20]. However, these papers consider their own individual setup and are difficult to compare with one other due to the use of different compilers, execution environments, feedback signals, and assumptions on the interactive process such as human participation to create task descriptions or provide natural language feedback. This makes it difficult to compare existing methods for code generation and to clearly understand the benefits of interactive generation.
2306.14898#5
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
6
Figure 1: Given an image and human instruction as the input, we introduce GPT4-Assisted Visual Instruction Evaluation (GAVIE) to assess the output from current LMMs, such as MiniGPT4 and mPLUG-Owl. BLUE represents LMMs can not accurately follow human instructions while RED means they suffer from the hallucination problem. After finetuning current LMMs on our proposed LRV-Instruction dataset, we can generate a more robust answer. lead to harmful consequences, especially when users without adequate domain knowledge over-rely on these increasingly convincing language models. In the context of LMM hallucinations, the model can generate descriptions with conflicting information to the given image. For instance, as shown in Fig. 1 (highlighted in red), existing LMMs [44; 26; 8; 11] tend to describe nonexistent objects such as a "dog" engaging in a nonexisting activity like "playing with a ball". Additionally, the model may generate long image descriptions without following human instructions (highlighted in blue).
2306.14565#6
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14824
6
In this work, we unlock the grounding capability for multimodal large language models. Grounding capability can provide a more convenient and efficient human-AI interaction for vision-language tasks. It enables the user to point to the object or region in the image directly rather than input detailed text descriptions to refer to it, the model can understand that image region with its spatial locations. Grounding capability also enables the model to respond with visual answers (i.e., bounding boxes), which can support more vision-language tasks such as referring expression comprehension. Visual answers are more accurate and resolve the coreference ambiguity compared with text-only responses. In addition, grounding capability can link noun phrases and referring expressions in the generated free-form text response to the image regions, providing more accurate, informational, and comprehensive answers.
2306.14824#6
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14898
6
To address these issues, we propose InterCode, the first standard coding benchmark designed natively with an interactive execution environment. Closely mimicking the human decision-making process, InterCode allows a coding agent to interactively receive feedback from compilers/interpreters that execute its code, and to submit further refinements. We design InterCode to be like a standard reinforcement learning (RL) environment that requires minimal human intervention and one in which generated code is treated as actions, which are executed to reveal observations. Our framework is (1) language and platform agnostic and can easily be used for new coding problems, (2) uses self-contained Docker environments to provide safe execution, and (3) compatible out-of-the-box with traditional seq2seq generation methods, while also enabling and empowering the development of new interactive techniques. We demonstrate the power of the framework by implementing Bash, SQL, and Python tasks within InterCode, building on pre-existing static datasets [62, 32, 4]. We perform experiments across diverse models and prompting methods, including ReAct [51] and Plan & Solve [43]. Our findings concretely showcase the benefits of interaction towards solving coding tasks, discuss the distribution of distinct code understanding challenges across different task settings, and explore the ease with which new tasks and datasets can be defined using InterCode. 2 To summarize, our paper makes the following contributions:
2306.14898#6
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
7
What are the likely causes of these hallucinations? As current LMMs are built on strong LLMs, they may over-rely on language priors and generate words more likely to go together with the instruction text regardless of the image content. What’s more, LMMs, such as MiniGPT4 [44] and LLaVA [26], employ synthetic instruction data for training, which are generally long and involve nonexistent objects, activities, or relationships in the image. Why can’t LMMs accurately follow human instructions? We conjecture it is due to the lack of diversity in their training data. For example, MiniGPT4 [44] is only instruction tuning with four instruction templates designed for image captioning tasks. Though MMGPT [11] and InstructBLIP [8] combine several datasets as the instruction tuning data, their instructions and answers are still based on a few templates.
2306.14565#7
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14824
7
We introduce KOSMOS-2, a multimodal large language model with grounding capability built upon KOSMOS-1. KOSMOS-2 is a Transformer-based causal language model and is trained using the next-word prediction task. In order to unlock the grounding capability, we construct a web-scale dataset of grounded image-text pairs, and combine it with the multimodal corpora in KOSMOS-1 to train the model. The grounded image-text pairs are built upon a subset of image-text pairs from LAION-2B [SBV+22] and COYO-700M [BPK+22]. We construct a pipeline to extract and link the text spans (i.e., noun phrases and referring expressions) in the caption to the spatial locations (e.g., bounding boxes) of its corresponding objects or regions in the image. We convert the spatial coordinates of the bounding boxes to a sequence of location tokens, which is then appended after its respective text spans. The data format serves as a “hyperlink” to connect the objects or regions of the image to the caption.
2306.14824#7
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14898
7
2 To summarize, our paper makes the following contributions: • We develop InterCode, a new, universal framework for interactive code generation, which provides ease of use, extensibility, and safety. • Using InterCode, we perform a comprehensive evaluation of state-of-the-art models and identify several avenues for improvements. • We release our framework as a new benchmark along with useful empirical tools to customize any new static code datasets into interactive tasks. # 2 Related Work
2306.14898#7
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
8
To address these challenges, we present LRV-Instruction, a large and diverse visual instruction benchmark. Our benchmark consists of 400k visual instructions generated by GPT4, taking inspiration from the success of recent GPT models in text-annotation tasks [27]. Unlike previous studies that focused on limited tasks and pre-defined templates created by human experts [44; 8; 11], LRV- Instruction covers 16 vision-language tasks with open-ended instructions and answers, as shown in Fig. 2 and Fig. 4. As observed by [19], current LMMs tend to answer "Yes" for any instructions presented to the model, even when the proper answer should be "No". Our investigation reveals that most LMMs are finetuned on unbalanced datasets containing only positive instructions (Tab. 1). To enable LMMs to respond to human instructions more faithfully, we design LRV-Instruction to include both negative and positive instructions for robust instruction tuning. Our negative instructions are generated at three semantic levels (Fig. 2): (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation in two different formats, Declarative and Interrogative. To improve
2306.14565#8
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14824
8
Experimental results demonstrate that KOSMOS-2 not only achieves competitive performance on lan- guage and vision-language tasks evaluated in KOSMOS-1, but also achieves impressive performance on grounding tasks (phrase grounding and referring expression comprehension) and referring tasks (referring expression generation). As shown in Figure 2, integrating the grounding capability enables KOSMOS-2 to be used for more downstream tasks, such as grounded image captioning, and grounded visual question answering. # 2 Construction of Web-Scale Grounded Image-Text Pairs (GRIT) We introduce GRIT2, a large-scale dataset of Grounded Image-Text pairs, which is created based on image-text pairs from a subset of COYO-700M [BPK+22] and LAION-2B [SBV+22]). We construct a pipeline to extract and link text spans (i.e., noun phrases and referring expressions) in the caption to their corresponding image regions. The pipeline mainly consists of two steps: generating noun- chunk-bounding-box pairs and producing referring-expression-bounding-box pairs. We describe these steps in detail below:
2306.14824#8
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14898
8
Interactive environments for coding. Most coding benchmarks (e.g. SQL - Spider [55], KaggleD- BQA [26]; Bash - NLC2CMD [1], NL2Bash [32]; Python - HumanEval [9], APPS [20], MBPP [4], CodeXGLUE [33], CodeNet [38]) frame the coding problem as a sequence transduction problem (from instruction to code), rather than an interactive decision making problem with an execution envi- ronment. Attempts have been made to simulate interaction by developing conversational, dialogue- style [57, 56], multi-step problem solving [36] datasets, which involve pre-annotated human-designed queries. The work closest to InterCode has been recent explorations of Python Jupyter Notebooks as a natural choice for interactive coding [21, 24, 54]. However, task data and settings often constrain allowed actions to a closed domain of code and libraries [24, 54], use evaluation procedures or met- rics that may not generalize [21], require human-in-the-loop participation (i.e. create task contexts, write problems, evaluate execution per task instance) [24], or are Python-exclusive [21, 24, 54,
2306.14898#8
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
9
Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation in two different formats, Declarative and Interrogative. To improve the robustness and flexibility of the evaluation on visual instruction tuning, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE) to assess the LMM output in two different aspects: Relevancy to evaluate the instruction-following performance and Accuracy to measure the visual hallucination in the LMM output. GAVIE does not require human-annotated groundtruth answers [32] and can be easily adapted to different formats instead of specific designs
2306.14565#9
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14824
9
Step-1: Generating noun-chunk-bounding-box pairs Given an image-text pair, we first extract noun chunks from the caption and associate them with image regions using a pretrained detector. As illustrated in Figure 3, we use spaCy [HMVLB20] to parse the caption (“a dog in a field of flowers") and extract all noun chunks (“a dog”, “a field” and “flowers”). We eliminate certain abstract noun phrases that are challenging to recognize in the image, such as “time”, “love”, and “freedom”, to reduce potential noise. Subsequently, we input the image and noun chunks extracted from the caption into a pretrained grounding model (e.g., GLIP [LZZ+22]) to obtain the associated bounding boxes. Non-maximum suppression algorithm is applied to remove bounding boxes that have a high overlap with others, even if they are not for the same noun chunk. We keep noun-chunk-bounding-box pairs with predicted confidence scores higher than 0.65. If no bounding boxes are retained, we discard the corresponding image-caption pair. 2A subset of GRIT can be downloaded at https://aka.ms/kosmos-2. 3
2306.14824#9
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14824
10
2A subset of GRIT can be downloaded at https://aka.ms/kosmos-2. 3 Sentence dependency relations adogina Identify Expand — —_—__- field of flowers noun chunks noun chunks adog a dog in a field of flowers os Keep “a dog in a field of flowers” a field a field of flowers ig Drop “a field of flowers” substrings ; flowers flowers Drop “flowers’ Detection & eu Post-process Pe a dog: a dog ina field of [290,371 605,750] flowers: a field: [290,371 ,605,750] [0,264,919,921] Step-1: Creating noun Step-2: Producing referring chunk - bounding box pairs expression - bounding box pairs Figure 3: The pipeline of constructing web-scale grounded image-text pairs.
2306.14824#10
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14898
10
Execution-based evaluation for coding. Evaluation for NL-to-code generation models has recently shifted away from surface form similarity metrics (BLEU [37, 2], ROUGE [31], Exact Match) towards execution oriented ratings (unit tests [4, 9, 21, 24, 20], output matching [16, 21, 62]). The rigidity of surface form analysis overlooks code syntax features, ignores execution effect, or over-penalizes alternative solutions [63], On the contrary, execution-based assessment is a more thorough and comprehensive score of code functionality [20] and is a more natural fit for open-domain program usage that does not constrain code generation to a subset of the language space [48]. However, for newer benchmarks and datasets that put forth task definitions incorporating execution-based evaluation (APPS [20], ExeDS [21], ODEX [48]), the fundamental code generation task (Context + Code → Execution → Score) is still devoid of interaction. InterCode combines execution-based evaluation with flexible task construction, enabling more diverse problem-solving paradigms within a unified coding task formulation. InterCode’s use of virtual containers as execution sandboxes protect against harmful actions and allow for advanced evaluation criteria beyond the aforementioned ones.
2306.14898#10
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
11
(Pos) Multi-Choice VQA (Pos) Object Detection Q: Choose the correct statement about the weather conditions in the image: (a) Cloudy and rainy, (b) Clear blue sky, (c)Foggy and misty, (d) Snowy and cold A: (b) Clear blue sky Q; What objects are on toddler’s feet? A: White socks (Pos) Object Attribute Detection (Pos) OCR Q: Describe the girl's hair color and whether she is wearing any accessory on her wrist. A: The girl’s color is black and the she is wearing a bracelet on her wrist. Q: What is the number written on the lead snowboarder? A: The number is 88 (Pos) Visual Entailment (Pos) VCR Q: Verify if this statement is correct: "There is a car parking in the image." = A: True : What appears to be the purpose of the green street sign? A: Giving directions or providing street name. (Neg) Nonexistent Object Manipulation (Neg) Existent Object Manipulation Q: Can you spot a hot air balloon above the tall tree? Q; Is the woman in the blue pants and pink shirt A: There is no hot air balloon mentioned in the image. sitting on the floor?
2306.14565#11
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14824
11
Figure 3: The pipeline of constructing web-scale grounded image-text pairs. Dataset Flickr Entities [PWC+15] RefCOCOg [MHT+15] RefCOCO [YPY+16] RefCOCO+ [YPY+16] Visual Genome [KZG+16] Images 31,783 26,711 19,994 19,992 108,077 Objects 275,775 54,822 50,000 49,856 4,102,818 513,644 85,474 142,209 141,564 - - 8.43 3.61 3.53 - GRIT (Ours) 90,614,680 137,349,210 114,978,233 4.7 # Text Spans Avg Expression Length Table 1: Comparison GRIT with existing visual grounding datasets.
2306.14824#11
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14898
11
Methods for interactive or execution-based coding. The value of generative code models and interactive problem solving has motivated a recent proliferation of work to augment reasoning capabilities’ of existing language models [51, 40, 43, 50, 60, 12] or propose new modeling techniques to tackle coding as a sequential decision making and reasoning tasks [6, 11, 17, 29, 8, 25], of which evaluation is unit test based. Approaches that leverage execution typically use re-ranking [61, 35, 53, 58] or majority vote [11, 29, 39] to decide on a final prediction. Additional work also explores incorporating human-in-the-loop [7, 23], compilers [44], and text [45, 59] feedback. A common thread among these contributions is that 1) the task setting can only provide the investigated form of feedback and 2) sought-after capabilities are exemplified by strong performance on favorably curated tasks and datasets, rendering comparisons across benchmarks tedious. InterCode has the potential to standardize the evaluation of these methods because 1) the interactive coding task is a conglomeration of many interesting interaction, reasoning, and decision-making challenges and 2) InterCode’s task construction makes it possible to incorporate a wide variety of sources of feedback. # 3 The InterCode Benchmark # 3.1 Formulation
2306.14898#11
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
12
balloon above the tall tree? Q; Is the woman in the blue pants and pink shirt A: There is no hot air balloon mentioned in the image. sitting on the floor? A: There is no hot air balloon mentioned in the image. (Neg) Knowledge Manipulation (Neg) Knowledge Manipulation Q; Was the youth employment rate in Ukraine gradually decreasing from 2010 to 2015? A: No.The youth employment rate in Morocco gradually increasing from 2010 to 2015. Q: Did Merkel arrive to the Los Angeles Get Out The Vote Rally? No, Hillary Clinton arrived to the Los Angeles Get Out The Vote Rally in the image.
2306.14565#12
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14824
12
# Text Spans Avg Expression Length Table 1: Comparison GRIT with existing visual grounding datasets. Step-2: Producing referring-expression-bounding-box pairs In order to endow the model with the ability to ground complex linguistic descriptions, we expand noun chunks to referring expressions. Specifically, we use spaCy to obtain dependency relations of the sentence. We then expand a noun chunk into a referring expression by recursively traversing its children in the dependency tree and concatenating children tokens with the noun chunk. We do not expand noun chunks with conjuncts. For noun chunks without children tokens, we keep them for the next process. In the example shown in Figure 3, the noun chunk ‘a dog’ can be expanded to “a dog in a field of flowers”, and the noun chunk ‘a field’ can be expanded to “a field of flowers”.
2306.14824#12
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14898
12
# 3 The InterCode Benchmark # 3.1 Formulation The InterCode benchmark formalizes interactive coding with execution feedback as a partially observable Markov decision process (POMDP) (U, S, A, O, T , R) with instruction space U, state 3 Action Space Environment Dataset Reward Function Bash SQL Python Ubuntu Terminal MySQL Database Python Interpreter MBPP [4] (117) NL2Bash [32] (200) Spider 1.0 [55] (1034) Latest Std. Output Submitted Function Table 1: Rundown of the two environments with Bash and SQL as action spaces developed using the InterCode framework. The numbers in parentheses refer to the number of task instances adopted from each dataset. Each environment is defined in under 200 lines of code total. Specific discussion of the environment construction and reward function can be found in § A.2 and § A.3
2306.14898#12
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
13
Figure 2: Examples of positive and negative instances in our LRV-Instruction dataset. RED means inconsistent elements in the negative instructions. More examples are in the Appendix. in [19]. From our experiments, we show that GAVIE is not only stable but also aligns with human evaluation. We empirically evaluate five publicly available LMMs [44; 26; 8; 11; 42] on our benchmark and found that existing LMMs seriously hallucinate when prompted with our negative instructions, especially with Existent Object Manipulation and Knowledge Manipulation instructions. We further verify the effectiveness of our LRV-Instruction by finetuning MiniGPT4 [44] and mPLUG-Owl [42] on this more balanced data. Our instruct-tuned LMMs suffer much less from hallucination and achieve state-of-the-art performance compared to the original MiniGPT4, LLaVA [26], InstructBLP [8], mPLUG-Owl [42] and MMGPT [11] on both our evaluation set and public benchmarks [19; 12; 9]. We also observe that Existent Object Manipulation and Knowledge Manipulation instructions are more challenging than Nonexistent Object Manipulation instructions for LMMs. Furthermore, a robust model performance requires a balanced ratio between positive and negative instances. To sum up, our contributions are three-fold:
2306.14565#13
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14824
13
Furthermore, we only retain referring expressions or noun chunks that are not contained by others. As shown in Figure 3, we keep the referring expression “a dog in a field of flowers” and drop “a field of flowers” (as it is entailed by “a dog in a field of flowers”) and ‘flowers’. We assign the bounding box of the noun chunk (‘a dog’) to the corresponding generated referring expression (“a dog in a field of flowers”). In the end, we obtain approximately 91M images, 115M text spans, and 137M associated bounding boxes. We compare GRIT with existing publicly accessible visual grounding datasets in Table 1. Data samples of GRIT are shown in the Appendix. # 3 KOSMOS-2: A Grounded Multimodal Large Language Model
2306.14824#13
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14898
13
space S, action space A, observation space O, transition function T : S × A → S, and reward function R : S × A → [0, 1]. Given a coding instruction u ∈ U in natural language, an agent issues code or a special submit keyword as an action at ∈ A. An action is admissible [49] if it can be parsed and executed in the compiler/interpreter environment, and an admissible action incurs a change in the latent state space st+1 ∈ S, and an execution feedback as observation ot+1 ∈ O. The interaction loop repeats until the submit action is issued, wherein the task episode ends and a reward r = R(sT , submit) ∈ [0, 1] is computed, with 1 representing task completion. We use the Success Rate (SR) metric, defined as the proportion of task episodes where r = 1. We also define the Error % metric, which is the percentage of non admissible actions across task episodes. # 3.2 Construction pipeline At a high level, InterCode decomposes the construction of an interactive coding task into three modular parts: (1) environment construction, (2) data collection, and (3) reward design. This workflow allows for the safe execution of transition functions, flexible reward design, and convenient adaptation of existing instructions to an interactive setting.
2306.14898#13
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
14
• We build LRV-Instruction, a large and diverse dataset containing 400k visual instructions, with 16 vision and language tasks and negative instructions in different semantic levels and styles. • We propose GAVIE, a novel approach to evaluate visual instruction tuning without requiring groundtruth answers and pre-designed instruction formats. • We conduct comprehensive experiments to investigate the hallucination of current LMMs. The empirical study validates the effectiveness of LRV-Instruction for robust visual instruction tuning. # 2 Related Works
2306.14565#14
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14824
14
# 3 KOSMOS-2: A Grounded Multimodal Large Language Model KOSMOS-2 is a grounded multimodal large language model, which integrates grounding and referring capabilities compared with KOSMOS-1. The model can accept image regions selected by the user using bounding boxes as input, provide visual answers (i.e., bounding boxes), and ground the text output to the visual world. KOSMOS-2 adopts the same model architecture and training objective as KOSMOS-1. We add grounded image-text pairs into the training data to endow the model with grounding and referring capabilities. For a text span (such as noun phrase and referring expression) and its corresponding bounding boxes in a grounded image-text pair, We discretize continuous coordinates of bounding boxes into a sequence of location tokens to encode with text tokens in a unified way. Then we link the location tokens and their corresponding text span via a “hyperlink” data 4 format. The model is trained to establish a mapping between image regions and their corresponding location tokens and connect the image regions with their associated text spans. # 3.1 Grounded Input Representations
2306.14824#14
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14898
14
Docker-based environments. InterCode uses Docker [34] virtual containers as a general-purpose execution sandbox. Given a Dockerfile that defines a system and execution entrypoint, InterCode creates a corresponding, stateful virtual container that hosts the desired state space and transition function. We choose Docker as the basis of InterCode’s environment construction for its safe execution in virtual containers, reproducibility of a Dockerfile across any Docker-equipped machine, and excellent coverage of application code, libraries, and dependencies offered by the Dockerfile DSL. Data collection. InterCode requires that a dataset has at minimum two fields: query, a natural language instruction u ∈ U, and gold, an answer or code block that is a procedure for generating the correct answer. We define these conditions to make it easy to adapt existing text-to-code datasets to an interactive setting while also leaving plenty of bandwidth for constructing new tasks and datasets.
2306.14898#14
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
15
Early explorations [39; 17; 18; 35] of vision and language pre-trained models tend to use Bert-based [28; 13] models as the language decoder. Inspired by the recent success of large language models [37; 10; 43; 20; 22] and datasets [21; 6; 14; 24; 33; 34; 23], many studies [1; 16; 15] have been focused on improving vision-language pre-trained models by integrating powerful LLMs with in-context or few-shot learning capability. More recently, some visual instruction-tuned LMMs [44; 26; 11; 8] have emerged, showing excellent generalization performance in unseen VL tasks. Specifically, LLaVA [26] projects the output of a visual encoder as input to LLaMA [37] and trains both the alignment network and the LLM on synthetic data. MiniGPT4 [44] is built on BLIP-2 but uses Vicuna [7] as the language decoder. It only finetunes the cross-modal alignment network on longer image captions from ChatGPT. The research approaches [11; 8] are instruction-tuned on a collection of VL datasets, but InstructBLIP [8] uses BLIP2 [16] as the backbone
2306.14565#15
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14824
15
4 format. The model is trained to establish a mapping between image regions and their corresponding location tokens and connect the image regions with their associated text spans. # 3.1 Grounded Input Representations Given a text span and its associated bounding boxes in a grounded image-text pair, we first convert the continuous coordinates of bounding boxes into a sequence of discrete location tokens [CSL+21]. For an image with width W and height H, we evenly divide both the width and height into P segments each. P × P bins are obtained and each bin consists of (W/P ) × (H/P ) pixels. For each bin, we use a location token to represent the coordinates within that bin. We use the coordinates of the center pixel of each bin to determine bounding boxes on the image. In total, P × P location tokens are introduced, and these tokens are added to word vocabulary to enable unified modeling with texts.
2306.14824#15
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14898
15
Reward design. Across a single task episode, the action, observation, and state modification (if any) per interaction loop are implicitly logged by InterCode. InterCode’s default reward function determines task completion via an exact match of the agent’s execution output (observation and state modifications) against the gold command, where 1 is awarded only if all components match. Since Exact Match is usually too stringent of an evaluation criteria, InterCode exposes a reward function endpoint that has access to both the interaction history and the execution container, allowing for custom reward function definitions that can incorporate multiple signals. # 3.3 Implementations Following the procedure discussed in Section 3.2, we create two separate InterCode based environ- ments where Bash and SQL are the action spaces respectively. Table 1 summarizes them. InterCode-Bash. We define a bash shell within an Ubuntu Operating System as the task setting. To evaluate an agent’s ability to adapt generations to different situations, we architect four distinct file systems that can be swapped into the Bash environment by changing a single line in the Dockerfile.
2306.14898#15
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14824
16
The bounding box can be represented using its top-left point (x1, y1) and bottom-right point (x2, y2). We discretize the top-left and bottom-right corner points to location tokens, respec- tively. We concatenate the top-left location token <loc1>, the bottom-right location token <loc2>, and special boundary tokens <box> and </box>, to represent a single bounding box: “<box><loc1><loc2></box>”. If the text span is associated with multiple bounding boxes, we use a special token <delim> to concatenate the location tokens of these bounding boxes: “<box><loci Then we arrange the text span and its associated location tokens in a format resembling a “hyperlink” in markdown. For the text span with a single bounding box, the resulted sequence is “<p> text span </p><box><loc1><loc2></box>”, where <p> and </p> are special tokens indicating the beginning and end of the text span. The data format tells the model that image regions within the bounding box are associated with the text span. For the example shown in Figure 1, the input representation is:
2306.14824#16
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14898
16
We bootstrap the NL2Bash [32] dataset (which lacks specificity in queries and grounding to any underlying file system, preventing it from being used directly for interactive evaluations) to create an interactive coding task where an agent completes an instruction via bash actions. Transferring NL2Bash to the interactive task setting requires simple transformations to ground instructions and gold code blocks in the file system. First, we consider a subset of 1000 commands with each 4 having ≥ 4 utilities. We then filter out commands that are non-UNIX, non-Linux, or use utilities we currently do not support (eg. "ssh", "sudo", time, and GUI-dependent utilities). Finally, we enhance under-specified commands with specific file names/directory names/paths and update deprecated utilities/flags. The resulting 200 commands are grouped into 4 disjoint sets, 3 of which were grounded to custom-designed file systems, while one set is file-system agnostic. This categorization allows for a comprehensive evaluation of different command-grounding scenarios.
2306.14898#16
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
17
3 dataset with 16 vision and language tasks and negative instructions in different semantic levels and styles. This can help improve the robustness of current LMMs. Although LMMs are powerful in solving VL tasks, they also suffer from the hallucination inherited from LLM. Popular image captioning metrics like CIDEr [38] SPICE [2] do not appropriately penalize hallucination. CHAIR, [32], is unstable and needs complex human-crafted parsing rules for exact matching. Alternatively, [19] converts the hallucination into a binary classification problem. However, it requires the input questions to follow specific templates, such as "Is there a/an <object> in the image?". In comparison, our proposed GAVIE can evaluate model hallucination in an open-ended manner without needing human-annotated groundtruth answers. 3 LRV-Instruction Annotating large-scale visual instruction data can be challenging and time-consuming [40]. It involves expertly written detailed instructions and specific labels for different tasks. Inspired by the success of GPT4 in text-annotation tasks [10], we leverage GPT4, instead of human workers, to build LRV- Instruction. LRV-Instruction is designed to cover a variety of VL tasks, with open-ended positive and negative instructions (Fig. 2) in different linguistic styles.
2306.14565#17
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14824
17
For the example shown in Figure 1, the input representation is: <s>  <grounding> <p> It </p><box><loc44><loc863></box> seats next to <p> a campfire </p><box><loc4><loc1007></box> </s> where <s> and </s> indicate start- and end-of-sequence, and  represent the beginning and end of encoded image embeddings. <grounding> is a special token to tell the model ground the text output to the visual world. We map input text tokens and location tokens to embeddings via a lookup table. Following KOSMOS-1, a vision encoder and a resampler module are used to obtain image embeddings for input images. For language-only data, cross-modal paired data (i.e., image-text pairs), and interleaved multimodal data, we use the same input representations as of KOSMOS-1. # 3.2 Grounded Multimodal Large Language Models
2306.14824#17
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14898
17
The InterCode-Bash dataset instructions typically make one or both of the following two types of requests. It either 1. Requests information that can be answered via execution output (i.e. "How many files...", "What is the size of...", "Where is <file> stored?") or 2. Requests a change to the location/- configuration/content of a file or folder (i.e. "Move dir1 folder...", "Set permissions of...", "Append a line to..."). Therefore, we define a custom reward function that evaluates an agent’s performance against file system modifications and the latest execution output. Execution output is graded with a simple lexical similarity function. File system assessment is done in two parts. First, a comparison of the agent’s and gold command’s list of file system changes (list of [path, modification type ∈ [added, changed, deleted]] entries) reveals any extraneous or missing changes. Second, md5sum hashes of each commonly edited file path are compared to determine if an added or changed file was altered correctly. A max score of 1 is achieved only if the correct file paths are changed, the changes are correct, and the latest execution output matches the gold command output exactly. Additional Bash statistics and design details are discussed in § A.2.
2306.14898#17
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
18
Positive Visual Instruction Generation. Inspired by [40], we use the in-context few-shot learning ability of GPT4 to generate instruction data for various VL tasks automatically. We filter the output tasks manually and select 16 tasks (Tab. 3a) with text answers. In contrast with [26] using a few scene captions to represent an image as input to the text-only GPT4, we take advantage of the Visual Genome dataset [14], which has detailed visual information like image size, bounding boxes, and dense captions. Specifically, each image typically has 21 object regions and their corresponding captions. We leverage GPT4 to create the instruction-following data with the image size, bounding boxes, and dense captions as the "visual" input as if it can "see" the image. An example is shown in Fig. 3. For each image, we randomly select 10 tasks. To enrich the instructions, we ask GPT4 to generate instances in both declarative and interrogative formats. The limitation of [26; 44] is that synthetic visual instructions are generally longer and may involve unexpected descriptive information inconsistent with the image. Therefore, we explicitly instruct GPT4 with "The answers should be less than 30 words" to reduce the chance of generating extra unrelated information in the training data.
2306.14565#18
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14824
18
# 3.2 Grounded Multimodal Large Language Models Based on KOSMOS-1, KOSMOS-2 enhances multimodal large language models by incorporating grounding and referring capabilities. KOSMOS-2 also uses a Transformer-based causal language model as the backbone and is trained with the next-token prediction task. In addition to multimodal corpora used in KOSMOS-1 (including text corpora, image-caption pairs, and interleaved image-text data), we add grounded image-text pairs into training. The training loss only considers discrete tokens, such as text tokens and location tokens. The model can learn to locate and understand image regions by their location tokens and the whole image, associate text spans to image regions, and output bounding boxes of the image region using location tokens.
2306.14824#18
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14898
18
InterCode-SQL. We write a Dockerfile that defines a SQL interpreter within a MySQL database as the task setting. To create the databases and tables necessary for the task dataset, we write type resolution scripts and perform database conversions using the sqlite3mysql [41] Python library to adapt the Spider [55] database and table schema to a MySQL format. We then consolidate all setup code into a single, unified MySQL .sql dump that contains the complete set of schemas for all tables across 20 different databases. On container start-up, this file is invoked automatically, creating and populating databases with tables and tables with records. The Spider [55] dataset is a large-scale cross-domain dataset originally meant for evaluating SQL query generations from natural language questions. We adapt the development set, which contains 1034 task instances, and remove all extraneous columns aside from the natural language questions and gold SQL command. The instruction and gold values do not require any additional pre-processing to be compatible with the MySQL task environment.
2306.14898#18
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
19
To improve the diversity of images, we collect chart images from [36], which has human-annotated captions describing the construction and patterns of charts. We also select news images from [24] with many named entities in the captions. We ask GPT4 to generate question-answers pairs with captions as visual input. The last two images in Fig. 2 are examples. More examples and the general prompt we use are shown in the Appendix (Fig. 5, 32). Negative Visual Instruction Generation. As shown in [19], current LMMs tend to answer “Yes” by following any instruction presented to the model rather than predicting a faithful answer. To teach LMMs [44; 26; 11; 8] to answer questions in instructions faithfully, we introduce three categories of negative instructions based on Visual Genome dataset: (1) Neg1: "Nonexistent Object Manipulation" by introducing nonexistent objects, activities, attributes and interactions to the "visual" input as described above. (2) Neg2: "Existent Object Manipulation" by manipulating existent objects with inconsistent attributes (Fig. 2). (3) Neg3: "Knowledge Manipulation" by manipulating knowledge in instructions (Fig. 2). As for the detailed prompt of Neg1, we leverage the same format of the "visual" input as shown in Fig. 3. Additionally, we provide the following instructions to GPT4:
2306.14565#19
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14824
19
KOSMOS-2 shows new capabilities of grounding and referring. The referring capability enables us to point out image regions with bounding boxes. KOSMOS-2 can understand the image regions users refer to by the coordinates of bounding boxes. The referring capability provides a new interaction method. Different from previous MLLMs [ADL+22, HSD+22, HDW+23], which can only provide text output, KOSMOS-2 can provide visual answers (i.e., bounding boxes) and ground text output to the image. The grounding capability enables the model to provide more accurate, informative, and comprehensive responses. In addition to vision, language, and vision-language tasks evaluated in KOSMOS-1, the model can be used for more downstream tasks, such as grounded image-captioning, grounded VQA, referring expression comprehension and generation. 5 # 3.3 Model Training
2306.14824#19
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14898
19
Finally, we employ Intersection over Union (IoU), or more formally the Jaccard Index, to quantify the correctness of the latest execution output generated by the agent against the gold output, where both outputs are a list of records. A non-tabular execution output receives a reward of 0 by default. Among the items that lie in the intersection of the agent and gold execution outputs, we also apply a penalty if the records are in the incorrect order. To quantify how sorted the agent output is relative to the gold output, we lean on Kendall’s τ and adjust the output range to [0, 1]. The IoU score is then directly scaled by this coefficient. All in all, only a correctly ordered list with the exact set of records found in the gold output receives a score of 1. Visualizations like Figure 1 for SQL along with a more extensive implementation discussion for this environment are in § A.3 InterCode-Python. In this setting, we define a Python interpreter running within an Ubuntu operating System as the task setting. The Dockerfile can be configured to run any Python version. The interpreter is not initialized with any dependencies, but PyPI packages can be installed and used by the agent.
2306.14898#19
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
20
"Come up with 6 misleading instructions with nonexistent elements (nonexistent objects, nonexistent activities, nonexistent attributes, nonexistent interactions) in the images with different language styles. The instructions should contain interrogative and declarative sentences. Please also ex- plain the reason." We replace the underlined text with "existing objects but wrong attributes" for the prompt of Neg2. As for the Neg3: knowledge manipulation, we use GPT4 to manipulate the knowledge in the captions, including named entities, events or keywords. After that, GPT4 is instructed to generate questions and answers indicating correct knowledge. More examples are shown in the Appendix (Fig. 6, 32). Quality Control. We first remove instances with answers longer than 30 words. We remove the instances mentioning unneeded content like "bounding box description", "given caption", and "existing descriptions". Additionally, GPT4 will output the task name for each instruction. However, we found GPT4 sometimes assigns inaccurate task names for the instructions. As a result, we exclude 4
2306.14565#20
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14824
20
5 # 3.3 Model Training Training Setup We train the model on newly added grounded image-text pairs, monomodal text corpora, image-caption pairs, and interleaved image-text data. Our training process involves a batch size of 419K tokens, consisting of 185K tokens from text corpora, 215K tokens from original and grounded image-caption pairs, and 19K tokens from interleaved data. We train KOSMOS-2 for 60k steps, equivalent to around 25 billion tokens. The AdamW optimizer is employed with β = (0.9, 0.98). We set the weight decay to 0.01 and the dropout rate to 0.1. The learning rate increases to 2e-4 during the first 375 warm-up steps and linearly decays to zero. We train the model on 256 V100 GPUs and the training takes approximately one day to complete. In order to tell the model when to ground text output to the visual world, we prepend the ‘<grounding>’ token to the grounded caption during training.
2306.14824#20
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14898
20
We use the MBPP [4] dataset which presents the code completion task of synthesizing Python code from a method header and docstring. Evaluation of correctness is performed with an associated set of unit tests given by MBPP. The MBPP dataset is straightforward to adapt to the interactive setting, requiring no modifications to the query or evaluation components. Finally, we directly inherit MBPP’s evaluation procedure of proportion of unit tests passed. With InterCode, it is easy to use existing datasets to evaluate how well models can use different programming languages as actions. Validations. To verify the functionality of action execution in the task environment and the correctness of custom reward functions, we write testing scripts for both Bash and SQL that pass the gold command in as a dummy agent’s action to ensure that the command is admissible and executes without error, and to verify that the reward received by the command is 1. To confirm that InterCode’s dataset specification is enforced across multiple accepted file formats, we define a custom InterCode data loader class which is then rigorously unit tested. 5
2306.14898#20
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
21
4 ✔ ✔ ✔ ✔ Hard Negative Instructions? Self Generated Instruction? Address Hallucination? NOT Template Instruction? # of Self-Generated Instances 400k # of VL Tasks 16 ✘ ✘ ✘ ✘ 3k 1 ✘ ✔ ✘ ✔ 150k 3 ✘ ✘ ✘ ✘ ✘ 11 ✘ ✘ ✘ ✘ ✘ 5 ✘ ✘ ✘ ✔ ✘ 3 # Ours MiniGPT4 LLaVA InstructBLIP MMGPT mPLUG-Owl Table 1: A comparison of LRV-Instruction with datasets used by current LMMs.
2306.14565#21
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14824
21
Following KOSMOS-1, the vision encoder has 24 layers with 1,024 hidden size and 4,096 FFN intermediate size. The multimodal large language model component is a 24-layer MAGNETO Transformer [WMH+22, MWH+22] with 2,048 hidden dimensions, 32 attention heads, and 8,192 FFN intermediate size. The total number of trainable parameters amounts to approximately 1.6B. The image resolution is set to 224×224 and the patch size is 14×14. We divide the width and height of the image into 32 bins, with each bin consisting of 7×7 pixels. A total of 32×32 location tokens are added to the vocabulary. KOSMOS-2 uses the weights of KOSMOS-1 for initialization, the newly added word embeddings of location tokens are initialized randomly. We update all the parameters during training and instruction tuning.
2306.14824#21
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14898
21
Plan & Solve ' 1 1 Action Evaluate 1! Initial Msg. ' <action> . @ [Llet’s understand the problem and devise |, > after first 1 plan... Then, let's carry out the plan... ' 1 1 1 1 Initial Msg. You are helping a user who is trying to do something ina What is the country code and first name... : Single Turn ; action Question Question TELLLILILILLL LLL LL LL ia bee eee PEELE EEELEEEl (Lihat is the country code and first name... “Try Again” ReAct Agent ' Ce] eat 1 4 Execute Msg. u @ You will now execute your own plan. " Interact with a MySQL DB using SQL... Question " i : : : : : : : : ' s What is the country code and first name... | "1 ‘Agent ' " " <action |=) jp Upto Agent Agent : actions | C2) 1" [enought > <action> | (=p Observation () plan ' Upton! el : ' v : : : : : : : Initial Msg. You are a SQL code generator helping... Your task is to interact with a MySQL DB... Initial Msg. Interact with a MySQL DB using SQL to... Use interleaving Thought, Action, Obs... Question Gap
2306.14898#21
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
22
Prompt: Give an image with following information: bounding box, positions that are the object left-top corner coordinates(X, Y), object sizes(Width, Height). Highly overlapping bounding boxes may refer to the same object. bounding box: elephant heard on rocks X: 73 Y: 80 Width: 418 Height: 418 woman wearing long dress X: 176 Y: 298 Width: 35 Height: 83 group of green chairs X: 153 Y: 326 Width: 95 Height: 126 an orange bucket on the ground X: 91 Y: 341 Width: 38 Height: 36 a group of white umbrellas X: 99 Y: 82 Width: 112 Height: 28 aman in an orange shirt X: 204 Y: 265 Width: 31 Height: 47 a woman wearing a yellow dress X: 169 Y: 298 Width: 47 Height: 76 Task: image captioning, Image Sentiment Analysis, Image Quality Assessment, Object Interaction Analysis, Object Attribute Detection, Muli-choice VQA... Come up with 20 diverse instructions for all the tasks above with different language styles and accurate answers. The instructions should contain interrogative sentence and declarative sentences. The answers should be less than 30 words. Each task should have less than 3 instructions. GPT4 OUTPUT Example: Instruction: Craft a brief narrative about the baby elephant and adult elephant. Answer: A baby elephant is depicted behind an adult elephant, possibly seeking protection.
2306.14565#22
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14824
22
Instruction Tuning After the model is trained, we perform instruct tuning to better align KOSMOS-2 with human instructions. we combine vision-language instruction dataset (i.e., LLaVA- Instruct [LLWL23]) and language-only instruction datasets (i.e., Unnatural Instructions [HSLS22] and FLANv2 [LHV+23]) with the training data to tune the model. In addition, we construct grounded instruction data by utilizing the pairs of bounding boxes and expressions (i.e., noun phrases, and referring expressions) in GRIT. Given an expression-bounding-box pair, we use “<p> expression </p>” as the input instruction, and prompt the model to generate the corresponding location tokens of the bounding boxes. We also use the prompt like “<p> It </p><box><loc1><loc2></box> is” to ask the model to generate expressions according to its bounding boxes. Table B in Appendix presents more templates. # 4 Evaluation We first evaluate KOSMOS-2 on multimodal grounding and multimodal referring tasks to assess the new capabilities, and then test the model on language and perception-language tasks evaluated in KOSMOS-1. Multimodal grounding – Phrase grounding – Referring expression comprehension Multimodal referring – Referring expression generation
2306.14824#22
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14898
22
Your task is to interact with a MySQL DB... Initial Msg. Interact with a MySQL DB using SQL to... Use interleaving Thought, Action, Obs... Question Gap Litatis the country code and first name.. length Observation times ‘Observation (ie Lestt_cutput, rewara> We Leste output Vv v Terminate when plan is Terminate when reward =1 |! Terminate when thought chai completed or or exceeds n turns finishes or exceeds n turns n turns exceeded
2306.14898#22
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
23
Figure 3: One example to illustrate the prompt we use to generate the visual instruction data by GPT4. We use the bounding box coordinates and dense captions to represent image content. the task name in our release data. Furthermore, we remove the instructions asking about facial expressions. This is because the Visual Genome dataset doesn’t include facial expression attributes in the ground truth dense captions. To examine the quality of our dataset, we randomly sample 500 instances and ask ten expert annotators to determine whether the output answers from GPT4 are correct or not, with regard to the instruction and the image content. We found 91% of the instructions are appropriate for the image inputs. Furthermore, 85% of outputs are acceptable responses to the instructions. Even though some responses may contain errors, most generations conform to the correct structure, serving as applicable visual instruction tuning guidelines. We created a total of over 400k visual instructions after filtering.
2306.14565#23
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14824
23
Multimodal grounding – Phrase grounding – Referring expression comprehension Multimodal referring – Referring expression generation Perception-language tasks – Image captioning – Visual question answering Language tasks – Language understanding – Language generation # 4.1 Multimodal Grounding In order to evaluate the ability of multimodal grounding, we test KOSMOS-2 on widely used phrase grounding and referring expression comprehension tasks in a generation manner. Phrase grounding 6 <box> <loCg5> <lOC3g9> </box> <box> <locgg> <l0C495> </box> t t Grounded MLLM Grounded MLLM t t Aman ina blue hard <p> A man in a blue hat and <p> orange hard hat and orange safety vest </p> safety vest </p> (1) Phrase grounding (2) Referring expression comprehension Figure 4: Input format of evaluation on (1) phrase grounding and (2) referring expression comprehen- sion. task requires the model to predict a set of bounding boxes based on one or more given phrases that maybe interrelated within a single caption. Referring expression comprehension task encourages the model to locate the object described in a text referring expression within a given image.
2306.14824#23
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14898
23
Figure 2: Overview of Prompting Strategies adjusted for evaluation on InterCode. The "Try Again" termination constraint is conditioned on reward = 1, while ReAct [51] and Plan & Solve [43] are determined by the agent itself. This is because the purpose of the "Try Again" method is to explore how capable agents are at error correction from feedback, while the other two are more concerned with the overall success of general problem-solving strategies. # 4 Methods We perform preliminary experiments to gauge the proficiency and behavior of current large language models on interactive coding tasks with Bash and SQL. To observe and elicit relevant reasoning skills, we draw on several existing prompting strategies that have been put forth to augment language models’ reasoning and problem-solving skills. We apply these prompting strategies to models across the following three families: OpenAI (text-davinci-003, gpt-3.5-turbo, gpt-4), PaLM-2 (text-bison-001, chat-bison-001) [3], and Open Source (Vicuna-13B [13], StarChat-16B [28]). Figure 2 visualizes the four adjusted prompting strategies we evaluate on InterCode.
2306.14898#23
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
24
Evaluation Set. After the processing above, we randomly select 1000 instances as our evaluation set. Furthermore, we manually check the quality of all instances and see whether the instruction describes a valid task. If it’s not, we edit the instruction to make it clearer for LMMs. For example, we edit the instruction ‘Observe the beautiful rainbow-colored sign that says ’Le Louvre’. You won’t miss it!’ to "Are you able to observe the beautiful rainbow-colored sign that says ’Le Louvre’ in the image?" # 3.1 Data Statistics Tab. 1 shows a comparison of LRV-Instruction and other datasets used by current LMMs. LRV- Instruction covers much more VL tasks than existing visual instruction tuning datasets. Instead of only using positive instructions, LRV-Instruction also includes negative instructions at different semantic levels. In addition, employing the GPT4-assisted generation, LRV-Instruction has more open-ended instructions instead of following a few templates. From Fig. 4 (b), we observe that instructions with non-existing objects generated by GPT4 are diverse and physically plausible in the image, including “birds in the sky” or replacing ‘elephant’ with ‘zebra’. Fig. 10 in the appendix 5
2306.14565#24
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14824
24
By testing KOSMOS-2 on these two tasks, we can assess how well the model performs in grounding text descriptions to the visual world, which is crucial for developing advanced AI systems capable of handling complex multimodal tasks. For both phrase grounding and referring expression comprehension tasks, KOSMOS-2 is required to generate location tokens which are then converted to bounding boxes for evaluation. The input format is “<s><grounding>...”, where “<grounding>” is used to prompt the model to generate locations tokens. # 4.1.1 Phrase Grounding
2306.14824#24
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14898
24
Figure 2 visualizes the four adjusted prompting strategies we evaluate on InterCode. Single Turn is a zero-shot attempt. A model is given a simple description of the task setting and asked to generate code in a specific programming language that would address the query. The first generation in response to the user’s question is then evaluated in the InterCode environment. "Try Again" is an iterative feedback set up. In the initial message, the agent is informed of the task setting and its interactive nature; an agent has multiple turns to interact with the system, wherein each turn, upon generating an action, the execution output of the action is fed back as an observation. This continues until a reward of 1 (task completion) is achieved or the number of turns (n) is exhausted. The agent’s position in this approach is meant to mirror human software development as closely as possible. The goal of this method is to probe language models’ raw interactive coding abilities in addition to illustrating the benefits and different challenges that arise in interactive coding tasks.
2306.14898#24
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
25
5 Tasks of LRV-Instruction Statistic Image Captioning, Object Detection, Image Sentiment Analysis, Image # of Instructions 400k Quality Assessment, Object Interaction Analysis, Image Anomaly # of Images 35k Detection, Referential Expression Grounding, OCR, VCR, Object Attribute # of Interrogative Instruction 250k Detection, Muli-choice VQA, Semantic Segmentation, Dense Captioning, # of Declarative Instruction 150k Visual Entailment, Styled Image Caption, Activity Recognition, Chart # of Negative Instructions 180k Understanding Ave # of Instruction per Image 17.7 - Ave Answer Length 15.2 (a) 16 VL tasks in our dataset. Ave Instruction Length 12.1 Frequency of Non-Existent Objects man dog tree (c) Statistic of LRV-Instruction cat umbrella $k) 0 1000 2000 3000 4000 (b) Statistic of Non-existent objects generated by GPT (a) Distribution of different Manipulation Figure 4: Comprehensive Statistic of LRV-Instruction. In (d), BLUE means existent object manipula- tion. PINK means nonexistent object manipulation. GREEN means knowledge manipulation. shows the diverse distribution of knowledge manipulation, including event, number, date, persons, place, and others. # 4 Visual Instruction Tuning
2306.14565#25
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14824
25
# 4.1.1 Phrase Grounding We evaluate phrase grounding task on Flickr30k Entities [PWC+15] val and test splits. In order to reduce ambiguity, we do not prompt the model with individual phrases; instead, we use the current phrase along with the preceding words as input where preceding words serve as context: “ ... <p> {phrase} </p>”. For the example shown in Figure 4(1), the model needs to predict the locations of phrases “A man”, “a blue hard hat”, “orange safety vest” and “an intersection” in the caption “A man in a blue hard hat and orange safety vest stands in an intersection.”. To generate the location tokens for the phrase “A man” that is the beginning of the caption, the prompt is “<p>A man</p>”. For the phrase “orange safety vest”, the prompt is “A man in a blue hard hat and <p>orange safety vest</p>”. When multiple men are in the image, the context “A man in a blue hard hat and” explicitly helps the model locate the object to reduce ambiguity.
2306.14824#25
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14898
25
ReAct and Plan & Solve. We write prompts and design workflows that follow the text and task configurations described in ReAct [51] and Plan & Solve [43] as faithfully as possible. For these two approaches, the termination of a task episode is conditioned upon the agent’s own judgment, as our goal with these methods is to gauge the transferability to and efficacy of existing reasoning frameworks with respect to the interactive coding task. Full prompt templates are included in §B.7. # 5 Experiments # 5.1 Base models comparison Task performances. We first compare the success rate of models in the Single Turn and Try Again settings for both the InterCode-Bash and SQL datasets. From Table 2 and Table 3, we observe 6
2306.14898#25
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
26
shows the diverse distribution of knowledge manipulation, including event, number, date, persons, place, and others. # 4 Visual Instruction Tuning We constructed two current LMMs: MiniGPT4 [44] and mPLUG-Owl [42] as the backbones for visual instruction tuning. MiniGPT4 consists of the Vision transformer [25] backbone as the image encoder, Vicuna [7] as the text decoder and a pre-trained Q-Former to connect them. Vicuna is built upon LLaMA [37] with stronger following ability. Following [44], the Q-Former is designed to extract visual features from the frozen image encoder. Before feeding into the frozen Vicuna as the visual prompt, we use a learnable linear projection layer to narrow the gap between extracted visual features with Vicuna embeddings. mPLUG-Owl comprises a pre-trained visual encoder, a visual abstractor, and Vicuna [7] as the text decoder. The visual encoder is responsible for extracting visual features from the input images, and the visual abstractor distills these features using a set of learnable tokens. The resulting visual features are concatenated with the word embeddings of the input sentence and fed into Vicuna to generate the response. We freeze the visual abstractor and visual encoder. Instead, we adopt the low-rank adaptation [? ] to train the text decoder.
2306.14565#26
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14824
26
We obtain the location tokens in “<box>...</box>” from the model response and then covert it into bounding boxes. The generated bounding box is correct if its intersection over union (IoU) with the ground-truth bounding box is greater than 0.5. If KOSMOS-2 generates a location sequence that can not be converted correctly (e.g., “<box><loc1></box>”), we treat it as a negative sample. We use ANY-BOX protocol in MDETR [KSL+21]. We report the R@1, R@5, and R@10 metrics, where R@1/5/10 means calculating the recall using the top 1/5/10 generated bounding boxes. If there are fewer than 5 or 10 bounding boxes generated by KOSMOS-2, we use all available bounding boxes for the calculation.
2306.14824#26
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14898
26
6 InterCode-SQL Model / Hardness Easy Med Hard Extra All Single Turn Try Again (n = 10) Easy Med Hard Extra All text-davinci-003 gpt-3.5-turbo gpt-4 text-bison-001 chat-bison-001 Vicuna-13B StarChat-16B 20.6 22.6 19.8 23.8 18.5 8.1 21.8 4.9 8.3 7.2 10.9 6.5 1.3 7.4 1.7 5.7 4.6 5.7 4.0 0.6 2.9 0.0 3.6 3.0 0.6 0.0 0.0 0.0 7.4 10.5 9.1 11.5 7.9 2.6 8.9 32.4 72.5 87.5 27.0 22.2 18.9 22.3 14.6 44.3 76.7 12.3 7.8 3.4 8.5 5.2 43.7 66.7 5.7 6.9 1.7 2.9 4.2 21.1 52.4 0.6 0.0 0.0 1.2 15.6 47.3 73.7 12.9 9.9 6.3 9.7
2306.14898#26
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
27
# 5 GPT4-Assisted Visual Instruction Evaluation CHAIR [32] was introduced to evaluate object hallucination in image captioning tasks. However, it usually demands complex human-crafted rules. Alternatively, [19; 9] formulate the evaluation of hallucination as a binary classification task that prompts LMM to output "Yes" or "No". However, it is hard to evaluate the LMM output in an open-ended manner. In addition, both methods highly depend on human-annotated groundtruth answers.
2306.14565#27
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14824
27
Results Table 2 presents results on Flickr30k Entities [PWC+15] val and test splits. KOSMOS-2 achieves impressive zero-shot performance and outperforms GRILL [JMC+23], which relies on an attached detector, by a large margin. Moreover, our model outperforms traditional finetuned VisualBert [LYY+19] model by 7.4% R@1 on both val and test splits. In contrast to other models, KOSMOS-2 does not involve prior designs (e.g., object queries or proposals), leading to similar results among R@1, R@5, and R@10. These results demonstrate that KOSMOS-2 can generate high-quality 7
2306.14824#27
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14565
28
To this end, we introduce GPT4-Assisted Visual Instruction Evaluation (GAVIE) as a more flexible and robust approach to evaluate object-level hallucination. The general prompt we use is shown in the Appendix. GPT4 takes the dense captions with bounding box coordinates as the image content and compares human instructions and model response. Then we ask GPT4 to work as a smart teacher and score (0-10) students’ answers based on two criteria. (1) Accuracy: whether the response is accurate concerning the image content. (2) Relevancy: whether the response directly follows the instruction. We use GPT4-32k-0314 in the experiments. Fig. 1 successfully points out that "dog, ball" is inconsistent with the image, and the response from the MiniGPT4 did not address the instruction. Unlike previous evaluation methods [19; 32], GAVIE does not require human-annotated groundtruth answers and can freely adapt to diverse instruction formats. As for the knowledge level hallucination 6
2306.14565#28
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14824
28
7 Val Split Test Split Model Zero-shot R@1 R@5 R@10 R@1 R@5 R@I10 VisualBert [LYY* 19] x 70.4 84.5 86.3 71.3 85.0 86.5 MDETR [KSL*21] x 83.6 93.4 95.1 84.3 93.9 95.8 GLIP [LZZ+ 22] x 86.7 964 97.9 87.1 96.9 98.1 FIBER [DKG* 22] x 87.1 96.1 97.4 874 964 97.6 GRILL [JMC*23] v - - - 18.9 53.4 70.3 KOsMos-2 v 77.8 79.2 79.3 78.7 80.1 80.1 Table 2: Phrase grounding results on Flickr30k Entities. We report the R@1, R@5, and R@10 metrics, where R@1/5/10 means calculating the recall using the top 1/5/10 generated bounding boxes.
2306.14824#28
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14898
28
InterCode-Bash Model / File System 1 2 Single Turn 4 3 All 1 Try Again (n = 10) 2 3 4 All text-davinci-003 gpt-3.5-turbo gpt-4 text-bison-001 chat-bison-001 Vicuna-13B StarChat-16B 10.0 30.0 25.0 15.0 12.1 10.0 15.5 32.1 39.6 37.7 22.6 22.5 24.5 22.6 28.8 33.3 36.7 11.7 16.7 18.3 13.3 33.3 37.0 40.7 22.2 22.2 7.4 22.2 24.6 34.5 34.0 17.0 17.7 16.0 17.7 30.0 45.0 41.7 23.3 13.8 15.0 17.2 52.8 49.1 47.2 28.3 24.5 35.8 30.2 32.2 45.0 51.7 16.7 18.3 25.0 21.7 44.4 48.1 59.2 22.2 22.2 22.2 29.6 38.7 46.5 48.5 22.5 19.2 24.5 23.7
2306.14898#28
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
29
6 Backbone Perception Cognition Backbone Acc(Pos) Acc(Neg) Original MiniGPT4 Finetuned MiniGPT4 Original mPLUG-Owl Finetuned mPLUG-Owl 616.41 895.96 967.34 1298.78 232.71 296.43 276.07 328.21 Original MiniGPT4 Finetuned MiniGPT4 Original mPLUG-Owl Finetuned mPLUG-Owl 0.53 0.58 0.62 0.69 0.54 0.68 0.55 0.78 Table 2: Zero-shot multimodal evaluation on MME [9] of MiniGPT4-7B, mPLUG-Owl-7B between original models and LRV-Instruction-finetuned models. The left chart shows perception and cognition scores. The right chart shows the accuracy on the positive set and the negative set.
2306.14565#29
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14824
29
Zero- RefCOCO | RefCOCO+ | RefCOCOg shot val testA _ testB val testA —_ testB val test 81.41 87.04 74.17 75.90 81.45 66.70 74.86 75.77 87.51 90.40 82.67 81.13 85.52 72.96 83.35 83.31 90.05 92.93 85.26 84.49 90.10 77.77 84.54 85.20 90.68 92.59 87.26 85.74 90.13 79.38 87.11 87.32 Model UNITER [CLY* 19] MDETR [KSL*21] OFA [WYM*22] FIBER [DKG*+22] Qo VisionLLM [WCC*23] 86.7 - - - - - . : GRILL [IMC+23] - - - - - - - 47.5 Kosmos-2 Y (52.32 5742 47.26 45.48 50.73 42.24 60.57 61.65 Table 3: Referring expression comprehension results on RefCOCO, RefCOCO+ and RefCOCOg. We report the accuracy metric for all methods. locations without the need for post-processing redundant locations. This capability highlights the effectiveness of our model in handling phrase grounding tasks.
2306.14824#29
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14898
29
Table 3: Success Rate across file systems for single vs. multi-turn evaluation on InterCode-Bash (refer §A.2). To evaluate models’ ability to interact with different task settings, we evaluate disjoint sets of Bash instructions across four different file systems. Best metrics are in bold. that performance across different levels of task difficulty (SQL) and different file systems (Bash) is superior in the interactive setting for all models, with a notable multi-fold increase for GPT-4 (9.1% → 73.7%) on the InterCode-SQL task. Analysis of interactions. Manual inspection of trajectory logs indicates that models actively exercise later turns for discovering relevant context, correcting errors via execution feedback as observations, and solving problems via iteratively constructing and editing actions as affirmed by Figure 3. In addition, models also demonstrate a level of planning and modular problem solving; for instructions with gold commands that chain multiple commands together (i.e. with |, >, or ; in bash) or consist of multiple sub-problems (i.e. subqueries in SQL), models will use observations from solving smaller sub-problems in earlier turns to compose the higher-order action. Trajectories that exhibit these phenomena are in § B.4
2306.14898#29
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14824
30
locations without the need for post-processing redundant locations. This capability highlights the effectiveness of our model in handling phrase grounding tasks. # 4.1.2 Referring Expression Comprehension We assess the referring expression comprehension task using three well-established datasets: Re- fCOCO [YPY+16], RefCOCO+ [YPY+16] and RefCOCOg [MHT+15]. Both RefCOCO and RefCOCO+ were generated through a two-player game, with RefCOCO+ specifically designed to exclude spatial relations, such as “on the left”. RefCOCOg incorporates spatial relations and features longer expressions on average. Different from phrase grounding on Flickr30k entities, we measure this task by using referring expression as the input: “<p> referring expression </p>”. For the example shown in Figure 4(2), the input sequence is “<p>A man in a blue hard hat and orange safety vest</p>”. Similarly, the predicted bounding box is considered correct only if its IOU with the ground-truth bounding box is greater than 0.5. The failed decoded sequence is also treated as a negative sample. We use the first generated bounding box for the query expression to measure the accuracy.
2306.14824#30
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14898
30
Failure cases. With that said, both Figure 3 exhibits a plateauing in Success Rate and and Error %. This suggests that as the amount of context and feedback builds up, models are less capable of discerning relevant past history toward future actions. In late-turn scenarios, task episode trajectories often reveal repetition of earlier actions, a failure to effectively use recent observations towards deciding an appropriate next action, or an inability to recognize that a current problem-solving chain of thought is inconclusive or futile. This is particularly evident for hard and extra level InterCode- SQL task instructions that require context spanning across several tables and actions that incorporate multiple clauses. We note that even when the full schema of all tables and their descriptions are offered in addition to the original instructions, models still benefit greatly from using interaction to experiment with different JOIN and filtering operators across multiple turns, as demonstrated in § B.2. A larger context window size, retrieval of useful memory, and more adaptive reasoning paradigms are just a handful of potential solutions to overcoming such challenges. # 5.2 Prompting strategy comparison Initiating language agents with prompting strategies that encourage different forms of reasoning toward problem-solving improves performance on the interactive coding task to varying degrees. Table 4 presents side-by-side comparisons of the success rate, number of turns, and error rate per strategy. Compared to Try Again, which lacks specific guidance on leveraging multiple turns, more 7 # Success Rate
2306.14898#30
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
31
(a) Random Set. Table 3: Zero-shot object hallucination evaluation on POPE [19]. Objects not existing in the image are sampled with three different strategies. Random: random sampling, Popular: top-k most frequent objects in MS-COCO, Adversial: objects are first ranked based on co-occurring frequencies, then top-k frequent ones are sampled. Ours-7B means Finetuned mPLUG-Owl-7B. or images that are not from the Visual Genome dataset, we use the groundtruth answers as a reference and compare them with predictions (Fig. 7 in the appendix). 6 Experiment 6.1 Baselines. We evaluate the zero-shot performance of 5 recently released LMMs: (1) MiniGPT4; (2) LLaVA; (3) InstructBLIP; (4) Multimodal-GPT (MMGPT); (5) mPLUG-Owl. All models above have been tuned on their collected visual instruction data. # Implementation Setup
2306.14565#31
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]
2306.14824
31
Results Table 3 reports referring comprehension results on RefCOCO [YPY+16], Ref- COCO+ [YPY+16] and RefCOCOg [MHT+15]. KOSMOS-2 also obtains promising zero-shot performance on the comprehension task, significantly outperforming previous zero-shot models on RefCOCOg benchmark. However, compared to previous finetuned works, KOSMOS-2 achieves slightly lower performance on RefCOCO and RefCOCO+ than on RefCOCOg. This discrepancy can be attributed to the data distribution present in RefCOCO and RefCOCO+, where they tend to use a shorter referring expression (e.g., “left bottom”) during the two-player game. Hence, one of our future goals is to enhance MLLMs’ ability to accurately understand more types of human expressions. # 4.2 Multimodal Referring In addition to multimodal grounding tasks, we evaluate the model’s ability to understand image regions or objects users refer to via inputting bounding boxes. Compared with previous multimodal 8
2306.14824#31
Kosmos-2: Grounding Multimodal Large Language Models to the World
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
http://arxiv.org/pdf/2306.14824
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
cs.CL, cs.CV
20 pages
null
cs.CL
20230626
20230713
[ { "id": "2301.13688" }, { "id": "2210.08402" }, { "id": "2304.08485" }, { "id": "1905.00537" } ]
2306.14898
31
7 # Success Rate \ ' “ —®- chat-bison-001 45 b - -@- gpt-3.5 —e- gpt-4 : col gpt I 40} i “-@- starchat —@= chat-bison-001 —®- text-bison-001 _ @ opt 3.5 —e- text-davinci-003 Oe gpt4 ~ vicuna -@ starchat oe —@- text-bison-001 _ —@ text-davinci-003 -@- vicuna w a T w 6 T Success Rate N a N 6 2 4 6 8 10 2 4 6 8 10 Number of Interaction Turns Number of Interaction Turns (a) Success rate vs. turns for InterCode-Bash (b) Success rate vs. turns for InterCode-SQL (a) Success rate vs. turns for InterCode-Bash (b) Success rate vs. turns for InterCode-SQL Figure 3: Growth in Success Rate with increase in number of interaction turns across models configured with Try Again prompting strategy for InterCode-Bash and SQL tasks.
2306.14898#31
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
http://arxiv.org/pdf/2306.14898
John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao
cs.CL, cs.LG, cs.SE
Project site with code and data: https://intercode-benchmark.github.io
null
cs.CL
20230626
20231030
[ { "id": "2304.05128" }, { "id": "2207.10397" } ]
2306.14565
32
# Implementation Setup Training Details. As for MiniGPT4, we initialize from its checkpoint of the first pretraining stage. Then we instruct-tune the model on LRV-Instruction with the linear projection layer as the only learnable module. As for mPLUG-Owl, we train the text encoder by LoRA training. Additionally, we only replace the LLaVA dataset in their finetuning data with LRV-Instruction to make a fair comparison with the original Mplug-Owl. We utilize MiniGPT4-7B and mPLUG-Owl-7B since we don’t have the computing resources to finetune the 13B models. We trained our models on NVIDIA Quadro RTX 8000. As for the hyper-parameters, please refer to [44; 42]. Evaluation Benchmarks. Apart from our proposed evaluation set, we evaluate LMMs on three public benchmarks. MME [9] is a human-annotated benchmark, measuring perception and cognition abilities on 14 subtasks. POPE [19] is a recently released dataset to evaluate object hallucination. GQA dataset [12] is a public visual question-answer dataset with open-ended questions. # 6.2 Main Results
2306.14565#32
Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
http://arxiv.org/pdf/2306.14565
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang
cs.CV, cs.AI, cs.CE, cs.CL, cs.MM
40 pages, 32 figures. Under Review
null
cs.CV
20230626
20230929
[ { "id": "2307.05052" }, { "id": "2302.13971" }, { "id": "2307.05356" }, { "id": "2306.14565" }, { "id": "2306.13394" }, { "id": "2304.14178" }, { "id": "2305.10355" }, { "id": "2212.00280" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2205.14100" }, { "id": "1809.02156" }, { "id": "2306.06306" }, { "id": "2304.10592" }, { "id": "2301.12597" }, { "id": "2303.18223" }, { "id": "2010.03743" }, { "id": "2303.16634" }, { "id": "2212.10560" }, { "id": "2302.04023" }, { "id": "1908.03557" }, { "id": "2305.03726" }, { "id": "1907.11692" }, { "id": "2103.11943" }, { "id": "2303.15056" }, { "id": "2305.06500" } ]