doi (stringlengths 10–10) | chunk-id (int64 0–936) | chunk (stringlengths 401–2.02k) | id (stringlengths 12–14) | title (stringlengths 8–162) | summary (stringlengths 228–1.92k) | source (stringlengths 31–31) | authors (stringlengths 7–6.97k) | categories (stringlengths 5–107) | comment (stringlengths 4–398, ⌀) | journal_ref (stringlengths 8–194, ⌀) | primary_category (stringlengths 5–17) | published (stringlengths 8–8) | updated (stringlengths 8–8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2308.02490 | 80 | SUPER MARIO ODYSSEY THE LEGEND OF ZELDA BREATH OF WILD Options Start Required capabilities: OCR (d) Q: In which country was this photo taken? GT: australia Required capabilities: Recognition, knowledge (e) Q: Which are producers in this food web? GT: Phytoplankton Seaweed Required capabilities: OCR, knowledge, spatial awareness (f) Q: Who is the person in the image? GT: Michelle Yeoh Required capabilities: Recognition, knowledge (g) Q: Where is this photo taken? GT: oscar 2023 Required capabilities: Recognition, knowledge GPT-4V: The scene texts in the image are: - "MR. WILLIAM SHAKESPEARES" - "COMEDIES, HISTORIES, & TRAGEDIES." - "Published according to the True Originall Copies." GPT4_score: 0.9 GPT-4V: The scene texts in the image are "Mario Kart 8 Deluxe", "5:30 PM", "88%", "Options", "Start", "MARIOKART 8 DELUXE", "THE LEGEND OF ZELDA", and "FORTNITE". | 2308.02490#80 | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities | We propose MM-Vet, an evaluation benchmark that examines large multimodal
models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various
intriguing abilities, such as solving math problems written on the blackboard,
reasoning about events and celebrities in news images, and explaining visual
jokes. Rapid model advancements pose challenges to evaluation benchmark
development. Problems include: (1) How to systematically structure and evaluate
the complicated multimodal tasks; (2) How to design evaluation metrics that
work well across question and answer types; and (3) How to give model insights
beyond a simple performance ranking. To this end, we present MM-Vet, designed
based on the insight that the intriguing ability to solve complicated tasks is
often achieved by a generalist model being able to integrate different core
vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and
examines the 16 integrations of interest derived from the capability
combination. For evaluation metrics, we propose an LLM-based evaluator for
open-ended outputs. The evaluator enables the evaluation across different
question types and answer styles, resulting in a unified scoring metric. We
evaluate representative LMMs on MM-Vet, providing insights into the
capabilities of different LMM system paradigms and models. Code and data are
available at https://github.com/yuweihao/MM-Vet. | http://arxiv.org/pdf/2308.02490 | Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang | cs.AI, cs.CL, cs.CV, cs.LG | Add results of GPT-4V. Code, data and leaderboard:
https://github.com/yuweihao/MM-Vet | null | cs.AI | 20230804 | 20231024 | [
{
"id": "2302.13971"
},
{
"id": "2106.13884"
},
{
"id": "2306.05425"
},
{
"id": "2204.02311"
},
{
"id": "2309.17421"
},
{
"id": "2306.08640"
},
{
"id": "2305.10403"
},
{
"id": "2302.04166"
},
{
"id": "2203.15556"
},
{
"id": "2210.09263"
},
{
"id": "2306.14565"
},
{
"id": "2103.00020"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2204.00598"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2010.11929"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "2307.06281"
},
{
"id": "1910.13461"
},
{
"id": "2207.04672"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2211.10435"
},
{
"id": "2305.07895"
},
{
"id": "2307.02469"
},
{
"id": "2308.01390"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2310.11441"
},
{
"id": "2306.09265"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2304.06939"
},
{
"id": "2110.07205"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
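The row above shows the full 15-column schema once; the rows that follow repeat the same paper-level metadata (summary, authors, references) verbatim, so only doi, chunk-id, chunk, and id vary. Below is a minimal sketch of parsing rows with this schema and stitching chunks back into their source paper, assuming the table has been exported as a JSON Lines file named mm_vet_chunks.jsonl (the filename and export format are assumptions for illustration, not part of the dataset):

```python
import json
from collections import defaultdict

def load_chunks(path="mm_vet_chunks.jsonl"):
    """Read one row per line; each row is assumed to follow the schema in the header."""
    rows = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                rows.append(json.loads(line))
    return rows

def reassemble_by_doi(rows):
    """Concatenate chunks of the same paper in chunk-id order."""
    grouped = defaultdict(list)
    for row in rows:
        grouped[row["doi"]].append((row["chunk-id"], row["chunk"]))
    return {
        doi: " ".join(chunk for _, chunk in sorted(parts))
        for doi, parts in grouped.items()
    }

if __name__ == "__main__":
    rows = load_chunks()
    docs = reassemble_by_doi(rows)
    print(f"{len(rows)} chunks from {len(docs)} papers")
```

Sorting by chunk-id before joining preserves the original reading order of each paper.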
2308.02490 | 81 | PM", "88%", "Options", "Start", "MARIOKART 8 DELUXE", "THE LEGEND OF ZELDA", and "FORTNITE". GPT4_score: 0.7 GPT-4V: This photo was taken in Australia, during the Australian Open tennis tournament. GPT4_score: 1.0 GPT-4V: The producers in this food web are phytoplankton and seaweed. GPT4_score: 1.0 GPT-4V: The person in the image is Michelle Yeoh, a renowned actress and producer. GPT4_score: 1.0 GPT-4V: This photo was taken at the Oscars, where the woman is receiving an award. GPT4_score: 0.5 | 2308.02490#81 | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities |
2308.02490 | 83 | (a) Q: What occasions would someone use this meme? GT: This meme, commonly known as "Screaming Panda," is typically used to express shock, surprise, or fear. It could be used in response to a startling or unexpected event, or to convey a sense of panic or alarm. Required capabilities: Recognition, knowledge, language generation Me: I'll do it at 8 Time: 8.05 Me: looks like I gotta wait till 9 now (b) Q: Can you explain this meme? GT: This meme is a humorous take on procrastination and the tendency to delay tasks until a specific time. The person in the meme plans to do something at 8 o'clock, but when they miss that deadline by a few minutes, they decide to wait until 9 o'clock instead. The image of Kermit the Frog lying in bed represents the person's laziness and lack of motivation to complete the task. Required capabilities: Recognition, knowledge, language generation, OCR (c) Q: The table below gives information about the underground railway systems in six cities. Summarise the information by selecting and reporting the main features, | 2308.02490#83 | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities |
2308.02490 | 84 | about the underground railway systems in six cities. Summarise the information by selecting and reporting the main features, and make comparisons where relevant. You should write at least 150 words. GT: The table shows data about the underground rail networks in six major cities. The table compares the six networks in terms of their age, size and the number of people who use them each year. It is clear that the three oldest underground systems are larger and serve significantly more passengers than the newer systems. The London underground is the oldest system, having opened in 1863. It is also the largest system, with 394 kilometres of route. The second largest system, in Paris, is only about half the size of the London underground, with 199 kilometres of route. However, it serves more people per year. While only third in terms of size, the Tokyo system is easily the most used, with 1927 million passengers per year. Of the three newer networks, the Washington DC underground is the most extensive, with 126 kilometres of route, compared to only 11 kilometres and 28 kilometres for the Kyoto and Los Angeles systems. The Los Angeles network is the newest, having opened | 2308.02490#84 | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities |
2308.02490 | 85 | of route, compared to only 11 kilometres and 28 kilometres for the Kyoto and Los Angeles systems. The Los Angeles network is the newest, having opened in 2001, while the Kyoto network is the smallest and serves only 45 million passengers per year. Required capabilities: OCR, language generation, spatial awareness GPT-4V: This meme is often used to express shock, surprise, or disbelief in a humorous or exaggerated way. It can be used in response to unexpected news, events, or situations. GPT4_score: 0.6 GPT-4V: This meme humorously illustrates the tendency to procrastinate. The person plans to do something at 8 but delays it by 5 minutes and then decides to postpone the task until 9, exemplifying how easily tasks can be deferred. The image of Kermit the Frog relaxed and unbothered underscores the nonchalant attitude towards procrastination. GPT4_score: 1.0 GPT-4V: The table illustrates data on six underground railway systems worldwide, focusing on their opening dates, route lengths, and annual passenger numbers. London's underground, inaugurated in 1863, is the | 2308.02490#85 | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities |
2308.02490 | 86 | systems worldwide, focusing on their opening dates, route lengths, and annual passenger numbers. London's underground, inaugurated in 1863, is the oldest and has the most extensive network, boasting 394 kilometres of route. Despite its age and extensive network, it only caters to 775 million passengers annually, significantly less than Paris and Tokyo. Paris's metro, opened in 1900, has a 199-kilometre route and serves a staggering 1191 million passengers yearly, the highest among the listed cities. Tokyo's system, though younger (opened in 1927) and having a shorter 155-kilometre route, still manages to serve 1927 million passengers, showcasing the city's dense population and reliance on public transport. In contrast, Washington DC's metro, opened in 1976 with 126 kilometres of route, only caters to 144 million passengers annually. Kyoto and Los Angeles have the youngest and least extensive networks, opened in 1981 and 2001 with 11 and 28 kilometres of routes, respectively. Correspondingly, they have the lowest annual passenger numbers, with Kyoto at 45 million and Los Angeles at 50 million. In summary, there is a noticeable | 2308.02490#86 | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities |
2308.02490 | 87 | Correspondingly, they have the lowest annual passenger numbers, with Kyoto at 45 million and Los Angeles at 50 million. In summary, there is a noticeable disparity in the age, route length, and passenger numbers among these cities. While older and more extensive networks like London's do not necessarily translate to higher passenger numbers, cities like Paris and Tokyo demonstrate a significant demand for underground rail systems, likely attributed to their dense populations and urban structures. GPT4_score: 0.9 | 2308.02490#87 | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities |
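The GPT4_score values in the rows above are the 0.0–1.0 grades assigned by the LLM-based evaluator described in the summary column; MM-Vet reports them averaged per capability and overall. Below is a minimal sketch of that aggregation under a hypothetical record layout (the field names and sample values are illustrative, not the official MM-Vet data format):

```python
from collections import defaultdict

# Hypothetical records: each holds an evaluator score (0.0-1.0) and the
# capabilities the sample requires, mirroring the examples above.
records = [
    {"score": 0.9, "capabilities": ("OCR",)},
    {"score": 1.0, "capabilities": ("Recognition", "knowledge")},
    {"score": 1.0, "capabilities": ("OCR", "knowledge", "spatial awareness")},
    {"score": 0.5, "capabilities": ("Recognition", "knowledge")},
]

def capability_scores(records):
    """Average evaluator scores per capability and report them as percentages."""
    totals, counts = defaultdict(float), defaultdict(int)
    for rec in records:
        for cap in rec["capabilities"]:
            totals[cap] += rec["score"]
            counts[cap] += 1
    return {cap: 100.0 * totals[cap] / counts[cap] for cap in totals}

overall = 100.0 * sum(r["score"] for r in records) / len(records)
print(f"overall: {overall:.1f}")
for cap, score in sorted(capability_scores(records).items()):
    print(f"{cap}: {score:.1f}")
```

Multiplying by 100 matches the convention of reporting benchmark scores as percentages.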
2308.02490 | 88 |
Figure 6: GPT-4V prediction examples
# References
[1] ChatGPT can now see, hear, and speak. https://openai.com/blog/chatgpt-can-now-see-hear-and-speak, 2023.
[2] Mpt. https://github.com/mosaicml/llm-foundry#mpt, 2023.
[3] Harsh Agrawal, Karan Desai, Yufei Wang, Xinlei Chen, Rishabh Jain, Mark Johnson, Dhruv Batra, Devi Parikh, Stefan Lee, and Peter Anderson. Nocaps: Novel object captioning at scale. In Proceedings of the IEEE/CVF international conference on computer vision, pages 8948–8957, 2019.
[4] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35:23716–23736, 2022. | 2308.02490#88 | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities |
2308.02490 | 89 | [5] Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. arXiv preprint arXiv:2305.10403, 2023.
[6] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. VQA: Visual Question Answering. In ICCV, 2015.
[7] Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, et al. Speecht5: Unified-modal encoder-decoder pre-training for spoken language processing. arXiv preprint arXiv:2110.07205, 2021. | 2308.02490#89 | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities |
2308.02490 | 90 | [8] Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, and Ludwig Schmidt. Openflamingo: An open-source framework for training large autoregressive vision-language models. arXiv preprint arXiv:2308.01390, 2023.
[9] Anas Awadalla, Irena Gao, Joshua Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, and Ludwig Schmidt. Openflamingo, March 2023.
[10] Microsoft Azure. Azure cognitive services apis. https://azure.microsoft.com/en-us/products/ai-services/ai-vision, 2023. Accessed: 2023-06-20. | 2308.02490#90 | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities |
2308.02490 | 91 | [11] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
[12] Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. In NeurIPS, 2020.
[13] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023. | 2308.02490#91 | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities |
2308.02490 | 92 | [14] Minwoo Byeon, Beomhee Park, Haecheon Kim, Sungjun Lee, Woonhyuk Baek, and Saehoon Kim. Coyo-700m: Image-text pair dataset. https://github.com/kakaobrain/coyo-dataset, 2022.
[15] Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3558–3568, 2021.
[16] Ting Chen, Saurabh Saxena, Lala Li, David J Fleet, and Geoffrey Hinton. Pix2seq: A language modeling framework for object detection. In ICLR, 2022.
[17] Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325, 2015. | 2308.02490#92 | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities |
2308.02490 | 93 | [18] Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. Uniter: Learning universal image-text representations. In ECCV, 2020.
[19] Cheng-Han Chiang and Hung-yi Lee. Can large language models be an alternative to human evaluations? arXiv preprint arXiv:2305.01937, 2023.
[20] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
[21] Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022. | 2308.02490#93 | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities |
2308.02490 | 94 | [22] Marta R Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, et al. No language left behind: Scaling human-centered machine translation. arXiv preprint arXiv:2207.04672, 2022.
[23] Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructblip: Towards general-purpose vision-language models with instruction tuning. arXiv preprint arXiv:2305.06500, 2023.
[24] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020. | 2308.02490#94 | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities |
2308.02490 | 95 | [25] Danny Driess, Fei Xia, Mehdi S. M. Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, Wenlong Huang, Yevgen Chebotar, Pierre Sermanet, Daniel Duckworth, Sergey Levine, Vincent Vanhoucke, Karol Hausman, Marc Toussaint, Klaus Greff, Andy Zeng, Igor Mordatch, and Pete Florence. Palm-e: An embodied multimodal language model. In arXiv preprint arXiv:2303.03378, 2023.
[26] Yuxin Fang, Wen Wang, Binhui Xie, Quan Sun, Ledell Wu, Xinggang Wang, Tiejun Huang, Xinlong Wang, and Yue Cao. Eva: Exploring the limits of masked visual representation learning at scale. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19358–19369, 2023. | 2308.02490#95 | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities |
[27] Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Zhenyu Qiu, Wei Lin, Jinrui Yang, Xiawu Zheng, et al. Mme: A comprehensive evaluation benchmark for multimodal large language models. arXiv preprint arXiv:2306.13394, 2023.
[28] Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. Gptscore: Evaluate as you desire. arXiv preprint arXiv:2302.04166, 2023.
[29] Zhe Gan, Linjie Li, Chunyuan Li, Lijuan Wang, Zicheng Liu, and Jianfeng Gao. Vision-language pre-training: Basics, recent advances, and future trends. arXiv preprint arXiv:2210.09263, 2022.
[30] Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, and Mike Zheng Shou. Assistgpt: A general multi-modal assistant that can plan, execute, inspect, and learn. arXiv preprint arXiv:2306.08640, 2023.
[31] Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. Pal: Program-aided language models. arXiv preprint arXiv:2211.10435, 2022.
[32] Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, et al. Llama-adapter v2: Parameter-efficient visual instruction model. arXiv preprint arXiv:2304.15010, 2023.
[33] Tao Gong, Chengqi Lyu, Shilong Zhang, Yudong Wang, Miao Zheng, Qian Zhao, Kuikun Liu, Wenwei Zhang, Ping Luo, and Kai Chen. Multimodal-gpt: A vision and language model for dialogue with humans, 2023.
[34] Google. Bard. https://bard.google.com, 2023. Accessed: 2023-07-17.
[35] Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6904–6913, 2017.
[36] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556, 2022.
[37] Drew A Hudson and Christopher D Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In CVPR, 2019.
[38] Huggingface. Transformers agent. https://huggingface.co/docs/transformers/transformers_agents, 2023. Accessed: 2023-07-20.
[39] Geewook Kim, Teakgyu Hong, Moonbin Yim, JeongYeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, and Seunghyun Park. Ocr-free document understanding transformer. In European Conference on Computer Vision, pages 498–517. Springer, 2022.
[40] Wonjae Kim, Bokyung Son, and Ildoo Kim. Vilt: Vision-and-language transformer without convolution or region supervision. In ICML, 2021.
[41] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. IJCV, 2017.
[42] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461, 2019.
[43] Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Fanyi Pu, Jingkang Yang, Chunyuan Li, and Ziwei Liu. Mimic-it: Multi-modal in-context instruction tuning. arXiv preprint arXiv:2306.05425, 2023.
[44] Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu. Otter: A multi-modal model with in-context instruction tuning. arXiv preprint arXiv:2305.03726, 2023.
[45] Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, and Ying Shan. Seed-bench: Benchmarking multimodal llms with generative comprehension, 2023.
[46] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597, 2023.
[47] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In International Conference on Machine Learning, pages 12888–12900. PMLR, 2022.
[48] Xiujun Li, Xi Yin, Chunyuan Li, Xiaowei Hu, Pengchuan Zhang, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, et al. Oscar: Object-semantics aligned pre-training for vision-language tasks. In ECCV, 2020.
[49] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014.
[50] Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, and Lijuan Wang. Aligning large multi-modal model with robust instruction tuning. arXiv preprint arXiv:2306.14565, 2023.
[51] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv preprint arXiv:2304.08485, 2023.
[52] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv preprint arXiv:2304.08485, 2023.
[53] Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. Gpteval: Nlg evaluation using gpt-4 with better human alignment. arXiv preprint arXiv:2303.16634, 2023.
[54] Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, et al. Mmbench: Is your multi-modal model an all-around player? arXiv preprint arXiv:2307.06281, 2023.
[55] Yuliang Liu, Zhang Li, Hongliang Li, Wenwen Yu, Mingxin Huang, Dezhi Peng, Mingyu Liu, Mingrui Chen, Chunyuan Li, Lianwen Jin, et al. On the hidden mystery of ocr in large multimodal models. arXiv preprint arXiv:2305.07895, 2023.
[56] Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In NeurIPS, 2019.
[57] Timo Lüddecke and Alexander Ecker. Image segmentation using text and image prompts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7086–7096, 2022.
[58] Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. Ok-vqa: A visual question answering benchmark requiring external knowledge. In CVPR, pages 3195–3204, 2019.
[59] OpenAI. Gpt-4 technical report, 2023.
[60] OpenAI. Gpt-4v(ision) system card. 2023.
[61] OpenAI. Gpt-4v(ision) technical work and authors. 2023.
[62] Vicente Ordonez, Girish Kulkarni, and Tamara L Berg. Im2text: Describing images using 1 million captioned photographs. In NeurIPS, 2011.
[63] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
[64] Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. Instruction tuning with gpt-4. arXiv preprint arXiv:2304.03277, 2023.
[65] Thomas Politzer. Vision is our dominant sense. https://www.brainline.org/article/vision-our-dominant-sense. Accessed: 2023-05-20.
[66] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. arXiv preprint arXiv:2103.00020, 2021.
[67] Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. Robust speech recognition via large-scale weak supervision. In International Conference on Machine Learning, pages 28492–28518. PMLR, 2023.
[68] Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. arXiv preprint arXiv:2111.02114, 2021.
[69] Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In ACL, 2018.
[70] Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv preprint arXiv:2303.17580, 2023.
[71] Oleksii Sidorov, Ronghang Hu, Marcus Rohrbach, and Amanpreet Singh. Textcaps: a dataset for image captioning with reading comprehension. In ECCV, pages 742–758, 2020.
[72] Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. Towards vqa models that can read. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8317–8326, 2019.
[73] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
[74] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
[75] Maria Tsimpoukelli, Jacob Menick, Serkan Cabi, SM Eslami, Oriol Vinyals, and Felix Hill. Multimodal few-shot learning with frozen language models. arXiv preprint arXiv:2106.13884, 2021.
[76] Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, and Lijuan Wang. Git: A generative image-to-text transformer for vision and language. arXiv preprint arXiv:2205.14100, 2022.
[77] Wenhai Wang, Zhe Chen, Xiaokang Chen, Jiannan Wu, Xizhou Zhu, Gang Zeng, Ping Luo, Tong Lu, Jie Zhou, Yu Qiao, et al. Visionllm: Large language model is also an open-ended decoder for vision-centric tasks. arXiv preprint arXiv:2305.11175, 2023.
[78] Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, Mohammadhadi Bagheri, and Ronald M Summers. Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2097–2106, 2017.
[79] Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, and Yuan Cao. Simvlm: Simple visual language model pretraining with weak supervision. In ICLR, 2022.
[80] Peng Xu, Wenqi Shao, Kaipeng Zhang, Peng Gao, Shuo Liu, Meng Lei, Fanqing Meng, Siyuan Huang, Yu Qiao, and Ping Luo. Lvlm-ehub: A comprehensive evaluation benchmark for large vision-language models. arXiv preprint arXiv:2306.09265, 2023.
[81] Jianwei Yang, Hao Zhang, Feng Li, Xueyan Zou, Chunyuan Li, and Jianfeng Gao. Set-of-mark prompting unleashes extraordinary visual grounding in gpt-4v. arXiv preprint arXiv:2310.11441, 2023.
[82] Zhengyuan Yang, Zhe Gan, Jianfeng Wang, Xiaowei Hu, Faisal Ahmed, Zicheng Liu, Yumao Lu, and Lijuan Wang. Unitab: Unifying text and box outputs for grounded vision-language modeling. In European Conference on Computer Vision, pages 521–539. Springer, 2022.
[83] Zhengyuan Yang, Zhe Gan, Jianfeng Wang, Xiaowei Hu, Yumao Lu, Zicheng Liu, and Lijuan Wang. An empirical study of gpt-3 for few-shot knowledge-based vqa. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 3081–3089, 2022.
[84] Zhengyuan Yang, Linjie Li, Kevin Lin, Jianfeng Wang, Chung-Ching Lin, Zicheng Liu, and Lijuan Wang. The dawn of lmms: Preliminary explorations with gpt-4v(ision). arXiv preprint arXiv:2309.17421, 2023.
[85] Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Ehsan Azarnasab, Faisal Ahmed, Zicheng Liu, Ce Liu, Michael Zeng, and Lijuan Wang. Mm-react: Prompting chatgpt for multimodal reasoning and action. arXiv preprint arXiv:2303.11381, 2023.
2308.02490 | 111 | [86] Zhengyuan Yang, Yijuan Lu, Jianfeng Wang, Xi Yin, Dinei Florencio, Lijuan Wang, Cha Zhang, Lei Zhang, and Jiebo Luo. Tap: Text-aware pre-training for text-vqa and text-caption. In CVPR, pages 8751–8761, 2021.
[87] Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. mplug-owl: Modularization empowers large language models with multimodality. arXiv preprint arXiv:2304.14178, 2023.
[88] Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. From recognition to cognition: Visual commonsense reasoning. In CVPR, pages 6720–6731, 2019. | 2308.02490#111 | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities | We propose MM-Vet, an evaluation benchmark that examines large multimodal
models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various
intriguing abilities, such as solving math problems written on the blackboard,
reasoning about events and celebrities in news images, and explaining visual
jokes. Rapid model advancements pose challenges to evaluation benchmark
development. Problems include: (1) How to systematically structure and evaluate
the complicated multimodal tasks; (2) How to design evaluation metrics that
work well across question and answer types; and (3) How to give model insights
beyond a simple performance ranking. To this end, we present MM-Vet, designed
based on the insight that the intriguing ability to solve complicated tasks is
often achieved by a generalist model being able to integrate different core
vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and
examines the 16 integrations of interest derived from the capability
combination. For evaluation metrics, we propose an LLM-based evaluator for
open-ended outputs. The evaluator enables the evaluation across different
question types and answer styles, resulting in a unified scoring metric. We
evaluate representative LMMs on MM-Vet, providing insights into the
capabilities of different LMM system paradigms and models. Code and data are
available at https://github.com/yuweihao/MM-Vet. | http://arxiv.org/pdf/2308.02490 | Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang | cs.AI, cs.CL, cs.CV, cs.LG | Add results of GPT-4V. Code, data and leaderboard:
https://github.com/yuweihao/MM-Vet | null | cs.AI | 20230804 | 20231024 | [
{
"id": "2302.13971"
},
{
"id": "2106.13884"
},
{
"id": "2306.05425"
},
{
"id": "2204.02311"
},
{
"id": "2309.17421"
},
{
"id": "2306.08640"
},
{
"id": "2305.10403"
},
{
"id": "2302.04166"
},
{
"id": "2203.15556"
},
{
"id": "2210.09263"
},
{
"id": "2306.14565"
},
{
"id": "2103.00020"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2204.00598"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2010.11929"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "2307.06281"
},
{
"id": "1910.13461"
},
{
"id": "2207.04672"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2211.10435"
},
{
"id": "2305.07895"
},
{
"id": "2307.02469"
},
{
"id": "2308.01390"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2310.11441"
},
{
"id": "2306.09265"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2304.06939"
},
{
"id": "2110.07205"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2308.02490 | 112 | [89] Andy Zeng, Adrian Wong, Stefan Welker, Krzysztof Choromanski, Federico Tombari, Aveek Purohit, Michael Ryoo, Vikas Sindhwani, Johnny Lee, Vincent Vanhoucke, et al. Socratic models: Composing zero-shot multimodal reasoning with language. arXiv preprint arXiv:2204.00598, 2022.
[90] Yan Zeng, Hanbo Zhang, Jiani Zheng, Jiangnan Xia, Guoqiang Wei, Yang Wei, Yuchen Zhang, and Tao Kong. What matters in training a gpt4-style language model with multimodal inputs? arXiv preprint arXiv:2307.02469, 2023.
[91] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022. | 2308.02490#112 | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities | We propose MM-Vet, an evaluation benchmark that examines large multimodal
models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various
intriguing abilities, such as solving math problems written on the blackboard,
reasoning about events and celebrities in news images, and explaining visual
jokes. Rapid model advancements pose challenges to evaluation benchmark
development. Problems include: (1) How to systematically structure and evaluate
the complicated multimodal tasks; (2) How to design evaluation metrics that
work well across question and answer types; and (3) How to give model insights
beyond a simple performance ranking. To this end, we present MM-Vet, designed
based on the insight that the intriguing ability to solve complicated tasks is
often achieved by a generalist model being able to integrate different core
vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and
examines the 16 integrations of interest derived from the capability
combination. For evaluation metrics, we propose an LLM-based evaluator for
open-ended outputs. The evaluator enables the evaluation across different
question types and answer styles, resulting in a unified scoring metric. We
evaluate representative LMMs on MM-Vet, providing insights into the
capabilities of different LMM system paradigms and models. Code and data are
available at https://github.com/yuweihao/MM-Vet. | http://arxiv.org/pdf/2308.02490 | Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang | cs.AI, cs.CL, cs.CV, cs.LG | Add results of GPT-4V. Code, data and leaderboard:
https://github.com/yuweihao/MM-Vet | null | cs.AI | 20230804 | 20231024 | [
{
"id": "2302.13971"
},
{
"id": "2106.13884"
},
{
"id": "2306.05425"
},
{
"id": "2204.02311"
},
{
"id": "2309.17421"
},
{
"id": "2306.08640"
},
{
"id": "2305.10403"
},
{
"id": "2302.04166"
},
{
"id": "2203.15556"
},
{
"id": "2210.09263"
},
{
"id": "2306.14565"
},
{
"id": "2103.00020"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2204.00598"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2010.11929"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "2307.06281"
},
{
"id": "1910.13461"
},
{
"id": "2207.04672"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2211.10435"
},
{
"id": "2305.07895"
},
{
"id": "2307.02469"
},
{
"id": "2308.01390"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2310.11441"
},
{
"id": "2306.09265"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2304.06939"
},
{
"id": "2110.07205"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2308.02490 | 113 | [92] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging llm-as-a-judge with mt-bench and chatbot arena, 2023.
[93] Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023.
[94] Wanrong Zhu, Jack Hessel, Anas Awadalla, Samir Yitzhak Gadre, Jesse Dodge, Alex Fang, Youngjae Yu, Ludwig Schmidt, William Yang Wang, and Yejin Choi. Multimodal c4: An open, billion-scale corpus of images interleaved with text. arXiv preprint arXiv:2304.06939, 2023.
| 2308.02490#113 | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities | We propose MM-Vet, an evaluation benchmark that examines large multimodal
models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various
intriguing abilities, such as solving math problems written on the blackboard,
reasoning about events and celebrities in news images, and explaining visual
jokes. Rapid model advancements pose challenges to evaluation benchmark
development. Problems include: (1) How to systematically structure and evaluate
the complicated multimodal tasks; (2) How to design evaluation metrics that
work well across question and answer types; and (3) How to give model insights
beyond a simple performance ranking. To this end, we present MM-Vet, designed
based on the insight that the intriguing ability to solve complicated tasks is
often achieved by a generalist model being able to integrate different core
vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and
examines the 16 integrations of interest derived from the capability
combination. For evaluation metrics, we propose an LLM-based evaluator for
open-ended outputs. The evaluator enables the evaluation across different
question types and answer styles, resulting in a unified scoring metric. We
evaluate representative LMMs on MM-Vet, providing insights into the
capabilities of different LMM system paradigms and models. Code and data are
available at https://github.com/yuweihao/MM-Vet. | http://arxiv.org/pdf/2308.02490 | Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang | cs.AI, cs.CL, cs.CV, cs.LG | Add results of GPT-4V. Code, data and leaderboard:
https://github.com/yuweihao/MM-Vet | null | cs.AI | 20230804 | 20231024 | [
{
"id": "2302.13971"
},
{
"id": "2106.13884"
},
{
"id": "2306.05425"
},
{
"id": "2204.02311"
},
{
"id": "2309.17421"
},
{
"id": "2306.08640"
},
{
"id": "2305.10403"
},
{
"id": "2302.04166"
},
{
"id": "2203.15556"
},
{
"id": "2210.09263"
},
{
"id": "2306.14565"
},
{
"id": "2103.00020"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2204.00598"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2010.11929"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2304.08485"
},
{
"id": "2205.14100"
},
{
"id": "2307.06281"
},
{
"id": "1910.13461"
},
{
"id": "2207.04672"
},
{
"id": "2303.11381"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2211.10435"
},
{
"id": "2305.07895"
},
{
"id": "2307.02469"
},
{
"id": "2308.01390"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
},
{
"id": "2310.11441"
},
{
"id": "2306.09265"
},
{
"id": "2303.17580"
},
{
"id": "2305.03726"
},
{
"id": "2304.03277"
},
{
"id": "2303.12712"
},
{
"id": "2304.06939"
},
{
"id": "2110.07205"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2308.01542 | 0 | arXiv:2308.01542v1 [cs.HC] 3 Aug 2023
# Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents
# Ziheng Huang [email protected] University of California–San Diego San Diego, CA, USA
Sebastian Gutierrez [email protected] Temple University Philadelphia, PA, USA
Hemanth Kamana [email protected] Temple University Philadelphia, PA, USA
# Stephen MacNeil [email protected] Temple University Philadelphia, PA, USA | 2308.01542#0 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | The recent advent of large language models (LLM) has resulted in
high-performing conversational agents such as chatGPT. These agents must
remember key information from an ongoing conversation to provide responses that
are contextually relevant to the user. However, these agents have limited
memory and can be distracted by irrelevant parts of the conversation. While
many strategies exist to manage conversational memory, users currently lack
affordances for viewing and controlling what the agent remembers, resulting in
a poor mental model and conversational breakdowns. In this paper, we present
Memory Sandbox, an interactive system and design probe that allows users to
manage the conversational memory of LLM-powered agents. By treating memories as
data objects that can be viewed, manipulated, recorded, summarized, and shared
across conversations, Memory Sandbox provides interaction affordances for users
to manage how the agent should `see' the conversation. | http://arxiv.org/pdf/2308.01542 | Ziheng Huang, Sebastian Gutierrez, Hemanth Kamana, Stephen MacNeil | cs.HC | null | null | cs.HC | 20230803 | 20230803 | [
{
"id": "2210.08750"
},
{
"id": "2107.07567"
},
{
"id": "2304.03442"
},
{
"id": "2307.03172"
},
{
"id": "2204.08128"
},
{
"id": "2307.01142"
}
] |
2308.01552 | 0 | arXiv:2308.01552v1 [cs.AI] 3 Aug 2023
# InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent
Po-Lin Chen and Cheng-Shang Chang, Fellow, IEEE
Abstract: This research paper delves into the integration of OpenAI's ChatGPT into embodied agent systems, evaluating its influence on an interactive decision-making benchmark. Drawing a parallel to the concept of people assuming roles according to their unique strengths, we introduce InterAct. In this approach, we feed ChatGPT with varied prompts, assigning it numerous roles such as a checker and a sorter, then integrating them with the original language model. Our research shows a remarkable success rate of 98% in AlfWorld, which consists of 6 different tasks in a simulated household environment, emphasizing the significance of proficient prompt engineering. The results highlight ChatGPT's competence in comprehending and performing intricate tasks effectively in real-world settings, thus paving the way for further advancements in task planning.
Keywords: ChatGPT, AlfWorld, Task planning, InterAct.
# I. INTRODUCTION | 2308.01552#0 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01542 | 1 | # Stephen MacNeil [email protected] Temple University Philadelphia, PA, USA
ABSTRACT The recent advent of large language models (LLM) has resulted in high-performing conversational agents such as chatGPT. These agents must remember key information from an ongoing conversation to provide responses that are contextually relevant to the user. However, these agents have limited memory and can be distracted by irrelevant parts of the conversation. While many strategies exist to manage conversational memory, users currently lack affordances for viewing and controlling what the agent remembers, resulting in a poor mental model and conversational breakdowns. In this paper, we present Memory Sandbox, an interactive system and design probe that allows users to manage the conversational memory of LLM-powered agents. By treating memories as data objects that can be viewed, manipulated, recorded, summarized, and shared across conversations, Memory Sandbox provides interaction affordances for users to manage how the agent should "see" the conversation.
CCS CONCEPTS • Computing methodologies → Intelligent agents; • Human-centered computing → Interactive systems and tools;
# KEYWORDS Human-AI Interaction, Large Language Models, Chatbots | 2308.01542#1 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | The recent advent of large language models (LLM) has resulted in
high-performing conversational agents such as chatGPT. These agents must
remember key information from an ongoing conversation to provide responses that
are contextually relevant to the user. However, these agents have limited
memory and can be distracted by irrelevant parts of the conversation. While
many strategies exist to manage conversational memory, users currently lack
affordances for viewing and controlling what the agent remembers, resulting in
a poor mental model and conversational breakdowns. In this paper, we present
Memory Sandbox, an interactive system and design probe that allows users to
manage the conversational memory of LLM-powered agents. By treating memories as
data objects that can be viewed, manipulated, recorded, summarized, and shared
across conversations, Memory Sandbox provides interaction affordances for users
to manage how the agent should `see' the conversation. | http://arxiv.org/pdf/2308.01542 | Ziheng Huang, Sebastian Gutierrez, Hemanth Kamana, Stephen MacNeil | cs.HC | null | null | cs.HC | 20230803 | 20230803 | [
{
"id": "2210.08750"
},
{
"id": "2107.07567"
},
{
"id": "2304.03442"
},
{
"id": "2307.03172"
},
{
"id": "2204.08128"
},
{
"id": "2307.01142"
}
] |
2308.01552 | 1 | Keywords: ChatGPT, AlfWorld, Task planning, InterAct.
# I. INTRODUCTION
The advent of large language models (LLMs), underpinned by transformative advancements in natural language processing (NLP), has stimulated a revolution across a wide range of applications. Exemplified by models such as Transformer [1], T5 [2], GPT-4 [3], these language models have achieved impressive results in diverse tasks like paragraph summary, language translation, and code optimization. These achievements can be attributed to their ability to absorb and process massive amounts of data, making sense of the patterns and structures within the text. | 2308.01552#1 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01542 | 2 | # KEYWORDS Human-AI Interaction, Large Language Models, Chatbots
Multiple strategies have been introduced to manage agents' conversational memory. For example, the conversation can be automatically summarized [21] and refined [24] to reduce redundancy while maintaining key information. Some systems selectively store [12, 22] and update [1] key memories. Relevant memories can also be retrieved based on the user input [1, 15, 21]. However, these memory management strategies are hidden behind the interface, resulting in a lack of transparency. Users often do not know what strategy is being used and have limited control over it. This makes it difficult for users to repair conversational breakdowns that happen when there is a misalignment between how the agent manages the memory and how the user perceives the conversation. | 2308.01542#2 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | The recent advent of large language models (LLM) has resulted in
high-performing conversational agents such as chatGPT. These agents must
remember key information from an ongoing conversation to provide responses that
are contextually relevant to the user. However, these agents have limited
memory and can be distracted by irrelevant parts of the conversation. While
many strategies exist to manage conversational memory, users currently lack
affordances for viewing and controlling what the agent remembers, resulting in
a poor mental model and conversational breakdowns. In this paper, we present
Memory Sandbox, an interactive system and design probe that allows users to
manage the conversational memory of LLM-powered agents. By treating memories as
data objects that can be viewed, manipulated, recorded, summarized, and shared
across conversations, Memory Sandbox provides interaction affordances for users
to manage how the agent should `see' the conversation. | http://arxiv.org/pdf/2308.01542 | Ziheng Huang, Sebastian Gutierrez, Hemanth Kamana, Stephen MacNeil | cs.HC | null | null | cs.HC | 20230803 | 20230803 | [
{
"id": "2210.08750"
},
{
"id": "2107.07567"
},
{
"id": "2304.03442"
},
{
"id": "2307.03172"
},
{
"id": "2204.08128"
},
{
"id": "2307.01142"
}
] |
2308.01552 | 2 | ChatGPT [4] is an AI language model created by OpenAI, which has been trained using a combination of pretraining and fine-tuning with human feedback. This advanced model is built on the Transformer model, enabling it to produce responses that closely resemble human language. By undergoing extensive training on vast volumes of text data, ChatGPT excels in understanding and generating text in various languages and fields, answering queries, and engaging in dialogues. Unlike its predecessors that operate primarily based on a single prompt, ChatGPT combines text generation with code synthesis, thereby significantly enhancing its interactive abilities. In this paper, we assess the ability of ChatGPT to make decisions within the context of an AlfWorld simulated environment [5]. The aim is to understand the model's proficiency in absorbing and processing data to make rational decisions. | 2308.01552#2 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01542 | 3 | We present Memory Sandbox, shown in Figure 1, a system that allows users to see and manage the memory of conversational agents to align with user understanding of the conversation. Memory Sandbox transforms conversational memory, previously managed behind the user interface, into interactive memory objects within the interface. Users can manipulate the visibility and content of memory objects, spatially rearrange them, and share them across conversations. We make the following contributions: 1) The conceptualization of memory objects, which makes conversational memory transparent and interactive, and 2) The Memory Sandbox system that offers novel interaction affordances for users to view and manipulate the conversational memory of an intelligent agent. | 2308.01542#3 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | The recent advent of large language models (LLM) has resulted in
high-performing conversational agents such as chatGPT. These agents must
remember key information from an ongoing conversation to provide responses that
are contextually relevant to the user. However, these agents have limited
memory and can be distracted by irrelevant parts of the conversation. While
many strategies exist to manage conversational memory, users currently lack
affordances for viewing and controlling what the agent remembers, resulting in
a poor mental model and conversational breakdowns. In this paper, we present
Memory Sandbox, an interactive system and design probe that allows users to
manage the conversational memory of LLM-powered agents. By treating memories as
data objects that can be viewed, manipulated, recorded, summarized, and shared
across conversations, Memory Sandbox provides interaction affordances for users
to manage how the agent should `see' the conversation. | http://arxiv.org/pdf/2308.01542 | Ziheng Huang, Sebastian Gutierrez, Hemanth Kamana, Stephen MacNeil | cs.HC | null | null | cs.HC | 20230803 | 20230803 | [
{
"id": "2210.08750"
},
{
"id": "2107.07567"
},
{
"id": "2304.03442"
},
{
"id": "2307.03172"
},
{
"id": "2204.08128"
},
{
"id": "2307.01142"
}
] |
2308.01552 | 3 | Scholarly works such as ReAct [6] and Reflexion [7] showcase the decision-making, action-initiation, and reflective powers of LLMs, paving the way for remarkable progress in a range of text-based performance metrics. However, they all utilize a single language model (InstructGPT) which, despite numerous iterations of thought and reflection, often repeatedly commits the same mistakes. In this research, we devise a novel model, InterAct, which is founded on the architecture of the ReAct model [6]. It undergoes alterations in prompt formulations and incorporates different ChatGPT roles for support. In particular, we add a checker module to tackle the issue of object misidentification. The initial basic prompt has also been revised to bolster InterAct's capabilities in constructing comprehensive search paths. This approach effectively addresses the previously mentioned shortcomings of the ReAct model. Consequently, this approach yielded a success rate of 98% in this benchmark, a significant improvement from the base ReAct agent's accuracy of 75%. These experiments provide critical insights into the potential benefits and limitations of implementing ChatGPT in AI-driven systems and technologies. | 2308.01552#3 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
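The record above describes InterAct's key idea: layering auxiliary ChatGPT roles (such as a checker that catches object misidentification) on top of a ReAct-style actor. The excerpt contains no code, so the following is only a rough Python sketch of that coordination loop under stated assumptions: the prompt wording, the `query_llm` helper, and the text-based `env` interface are all illustrative and are not the authors' implementation.

```python
# Hypothetical sketch of InterAct-style role coordination (not the authors' code).
# `query_llm` stands in for any chat-completion call (e.g., ChatGPT); the prompt
# wording and the text-based `env` interface are illustrative assumptions.
from typing import Callable

ACTOR_PROMPT = (
    "You are an agent in a simulated household. Given the task and the "
    "interaction history, output the next action.\n{history}\nAction:"
)
CHECKER_PROMPT = (
    "You are a checker. The agent proposed the action: '{action}'.\n"
    "Latest observation: '{observation}'.\n"
    "Does the action refer to an object that is actually present? Answer YES or NO."
)

def interact_episode(task: str, env, query_llm: Callable[[str], str], max_steps: int = 30) -> bool:
    """Run one episode: an actor proposes actions, a checker vetoes object mix-ups."""
    history = f"Task: {task}"
    observation = env.reset()
    for _ in range(max_steps):
        history += f"\nObservation: {observation}"
        action = query_llm(ACTOR_PROMPT.format(history=history)).strip()

        # Checker role: reject actions that mention objects missing from the scene.
        verdict = query_llm(CHECKER_PROMPT.format(action=action, observation=observation))
        if verdict.strip().upper().startswith("NO"):
            history += f"\nChecker: '{action}' refers to a missing object; pick another action."
            continue

        observation, done, success = env.step(action)  # assumed env interface
        history += f"\nAction: {action}"
        if done:
            return success
    return False
```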
2308.01542 | 4 | 1 INTRODUCTION Large Language Models (LLMs) are currently capable of generating human-like responses in open-domain tasks [4]. This has led to a new generation of conversational agents, such as chatGPT, which are now being widely used across domains. To ensure that agents generate responses that are contextually relevant and coherent to an ongoing conversation, these agents must maintain a working memory of the conversational history that has occurred up to that point in the conversation. The default strategy is to use as much of the conversational history as will fit within the input size limit of the LLM. Parts of the conversations that go beyond that buffer limit are forgotten, which leads to breakdowns when users assume the model remembers past context. Additionally, as the input buffer size increases, the performance of the LLM degrades as it struggles to retrieve relevant context and can be distracted by irrelevant context [11, 18]. This problem is compounded because users do not know how the LLM is leveraging the memory to generate responses. | 2308.01542#4 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | The recent advent of large language models (LLM) has resulted in
high-performing conversational agents such as chatGPT. These agents must
remember key information from an ongoing conversation to provide responses that
are contextually relevant to the user. However, these agents have limited
memory and can be distracted by irrelevant parts of the conversation. While
many strategies exist to manage conversational memory, users currently lack
affordances for viewing and controlling what the agent remembers, resulting in
a poor mental model and conversational breakdowns. In this paper, we present
Memory Sandbox, an interactive system and design probe that allows users to
manage the conversational memory of LLM-powered agents. By treating memories as
data objects that can be viewed, manipulated, recorded, summarized, and shared
across conversations, Memory Sandbox provides interaction affordances for users
to manage how the agent should `see' the conversation. | http://arxiv.org/pdf/2308.01542 | Ziheng Huang, Sebastian Gutierrez, Hemanth Kamana, Stephen MacNeil | cs.HC | null | null | cs.HC | 20230803 | 20230803 | [
{
"id": "2210.08750"
},
{
"id": "2107.07567"
},
{
"id": "2304.03442"
},
{
"id": "2307.03172"
},
{
"id": "2204.08128"
},
{
"id": "2307.01142"
}
] |
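The "default strategy" described in the Memory Sandbox record above (keep as much recent history as fits in the model's input limit and silently forget the rest) can be made concrete with a small sketch. This is an assumption-laden illustration, not Memory Sandbox code: the message format is the usual role/content pairing, and the token count is a crude character-based heuristic rather than a real tokenizer.

```python
# Illustration of the default strategy described above: keep only the most recent
# messages that fit a token budget; older turns are silently forgotten.
def rough_token_count(text: str) -> int:
    # Crude heuristic (~4 characters per token); real systems use the model's tokenizer.
    return max(1, len(text) // 4)

def truncate_history(messages: list[dict], budget: int) -> list[dict]:
    kept, used = [], 0
    for message in reversed(messages):            # walk from newest to oldest
        cost = rough_token_count(message["content"])
        if used + cost > budget:
            break                                 # everything older falls out of memory
        kept.append(message)
        used += cost
    return list(reversed(kept))

history = [
    {"role": "user", "content": "My thesis is about online education stakeholders."},
    {"role": "assistant", "content": "Noted. Which stakeholders matter most to you?"},
    {"role": "user", "content": "Now draft an outline for chapter 2."},
]
# With a tiny budget the earliest turn is dropped, which is exactly the kind of
# silent forgetting that confuses users who assume the agent still remembers it.
print(truncate_history(history, budget=25))
```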
2308.01552 | 4 | In conclusion, the main insight of the paper is that the advancement of AI language models like ChatGPT presents an exciting opportunity to revolutionize and reshape our interaction with technology. By leveraging these models, we can build more intuitive, responsive, and smart technologies that can effectively understand and respond to human requirements. The key contributions of our research are summarized below: (1) We introduce InterAct, an improved method where each agent, like ChatGPT, can showcase unique abilities, adeptly rectifying the limitations found in the ReAct model, such as object misidentification and inefficient planning.
(2) We have designed new trajectory prompts that enable the agent to flawlessly locate items during its search process. In a decision-making test within the AlfWorld simulated environment, InterAct demonstrated a 98% success rate, significantly higher than the 75% accuracy of the base ReAct agent, suggesting its potential benefits in AI-centric systems and technologies.
The authors are with the Institute of Communications Engineering, National Tsing Hua University, Hsinchu 300044, Taiwan R.O.C. Email: [email protected]; [email protected]. | 2308.01552#4 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01542 | 5 | 2 SYSTEM OVERVIEW Memory Sandbox is a system that provides users with the ability to view and manipulate the memory model of an intelligent agent, resulting in a shared representation of their ongoing conversation. Memory Sandbox introduces the concept of a memory object, an interactive piece of conversational history that can be moved, edited, deleted, or combined with other memory objects through summarization. The interface is implemented in Next.js and uses the GPT-3.5 turbo model from the OpenAI API. Below we present the features of Memory Sandbox to help end users view and manage an LLM-powered agent's memory model.
2.1 View and manipulate memory objects Explainable AI research seeks to help people form mental models of intelligent systems [17]. Transparency of the inner workings of the system [6, 23] and interactivity to probe and manipulate the
| 2308.01542#5 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | The recent advent of large language models (LLM) has resulted in
high-performing conversational agents such as chatGPT. These agents must
remember key information from an ongoing conversation to provide responses that
are contextually relevant to the user. However, these agents have limited
memory and can be distracted by irrelevant parts of the conversation. While
many strategies exist to manage conversational memory, users currently lack
affordances for viewing and controlling what the agent remembers, resulting in
a poor mental model and conversational breakdowns. In this paper, we present
Memory Sandbox, an interactive system and design probe that allows users to
manage the conversational memory of LLM-powered agents. By treating memories as
data objects that can be viewed, manipulated, recorded, summarized, and shared
across conversations, Memory Sandbox provides interaction affordances for users
to manage how the agent should `see' the conversation. | http://arxiv.org/pdf/2308.01542 | Ziheng Huang, Sebastian Gutierrez, Hemanth Kamana, Stephen MacNeil | cs.HC | null | null | cs.HC | 20230803 | 20230803 | [
{
"id": "2210.08750"
},
{
"id": "2107.07567"
},
{
"id": "2304.03442"
},
{
"id": "2307.03172"
},
{
"id": "2204.08128"
},
{
"id": "2307.01142"
}
] |
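Chunk 5 of the Memory Sandbox record above introduces the memory object abstraction and notes that the interface is a Next.js front end over the GPT-3.5 turbo API. The Python sketch below is only an illustration of that data model; the class name, fields, and helper are hypothetical, and the assembled message list is what would be passed to a chat-completion call so that what the user sees matches what the model sees.

```python
# Hypothetical data model for "memory objects" (the real system is a Next.js app;
# names and fields here are illustrative assumptions, not the authors' code).
from dataclasses import dataclass, field
from uuid import uuid4

@dataclass
class MemoryObject:
    role: str                      # "user" or "assistant"
    content: str                   # the text of this piece of conversational history
    visible: bool = True           # hidden objects are excluded from the prompt
    id: str = field(default_factory=lambda: uuid4().hex)

def visible_messages(memories: list[MemoryObject]) -> list[dict]:
    """Assemble only the visible, user-curated memories into chat messages.

    The returned list is what would be sent as `messages` to a chat-completion
    endpoint (e.g., gpt-3.5-turbo), so what the user sees is what the model sees.
    """
    return [{"role": m.role, "content": m.content} for m in memories if m.visible]

memories = [
    MemoryObject("user", "I need help designing an online course."),
    MemoryObject("assistant", "Happy to help. Who are the stakeholders?"),
    MemoryObject("user", "Unrelated: what's a good pasta recipe?", visible=False),  # toggled off
]
print(visible_messages(memories))
```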
2308.01552 | 5 | This work was supported in part by the National Science and Technology, Taiwan, under Grant 111-2221-E-007-045-MY3, and in part by Qualcomm Technologies under Grant SOW NAT-487844-2.
# II. RELATED WORK
Transformers have emerged as the dominant architecture in various fields. Initially prominent in natural language processing, they have now extended their influence to include vision-based tasks [11],
[12] and even reinforcement learning [13], [14]. In the realm of robotics, Transformers have found practical applications in diverse areas such as path planning [15], [16], object recognition [17], and grasping [18]. | 2308.01552#5 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01542 | 6 | [Figure 1 screenshot: two side-by-side conversations ("Conversation 1" with a design-process chatbot, "Conversation 2" with a systems-thinking chatbot) that share a selectable, draggable summary memory object listing online-education stakeholders (students, teachers, administrators, parents, support staff, technical teams, and regulatory bodies).]
Figure 1: Memory Sandbox is a system that enables users to see and manage the memory of conversational agents. Memory Sandbox provides the following interaction affordances: 1) toggle memory visibility, 2) add memory, 3) edit memory, 4) delete memory, 5) summarize memory, 6) create a new conversation, and 7) share memory.
system [16] have been demonstrated to help people interpret and interact with intelligent systems to achieve their goals. | 2308.01542#6 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | The recent advent of large language models (LLM) has resulted in
high-performing conversational agents such as chatGPT. These agents must
remember key information from an ongoing conversation to provide responses that
are contextually relevant to the user. However, these agents have limited
memory and can be distracted by irrelevant parts of the conversation. While
many strategies exist to manage conversational memory, users currently lack
affordances for viewing and controlling what the agent remembers, resulting in
a poor mental model and conversational breakdowns. In this paper, we present
Memory Sandbox, an interactive system and design probe that allows users to
manage the conversational memory of LLM-powered agents. By treating memories as
data objects that can be viewed, manipulated, recorded, summarized, and shared
across conversations, Memory Sandbox provides interaction affordances for users
to manage how the agent should `see' the conversation. | http://arxiv.org/pdf/2308.01542 | Ziheng Huang, Sebastian Gutierrez, Hemanth Kamana, Stephen MacNeil | cs.HC | null | null | cs.HC | 20230803 | 20230803 | [
{
"id": "2210.08750"
},
{
"id": "2107.07567"
},
{
"id": "2304.03442"
},
{
"id": "2307.03172"
},
{
"id": "2204.08128"
},
{
"id": "2307.01142"
}
] |
2308.01552 | 6 | One notable example is RT-1 [19], which utilizes a Transformer that takes images from a robot's camera and natural language task instructions as inputs and directly outputs tokenized actions. RT-1 can also acquire new skills by observing other robots' experiences, opening opportunities for enhanced robot capabilities through multi-robot datasets. Another instance is SayCan [20], a study conducted by Google's AI team and Everyday Robots. This research employs PaLM [21] and an affordance function to empower robots to carry out complex tasks based on natural language instructions. The resulting system, PaLM-SayCan, transforms user instructions into actionable plans for the robot. Inner Monologue [22] has made further advancements by incorporating injected feedback from the environment. The work in [23] demonstrated that even without any training, sizable language models can be effectively prompted to produce credible action plans driven by goals. They also suggested multiple techniques to enhance the model's ability to generate executable outputs, all without the need for invasive probing or modifications to the underlying model. | 2308.01552#6 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01542 | 7 | system [16] have been demonstrated to help people interpret and interact with intelligent systems to achieve their goals.
Memory Sandbox makes the conversational memory explicit through the use of "memory objects" which can be viewed and manipulated within the interface. This was inspired by prior work that "objectifies" tools [2, 3] and attributes [20] to enable flexibility, expressiveness, and direct manipulation. This results in a "shared representation" [7, 8] and common ground [5], so what users see on the front-end is what an LLM would "see" on the back-end.
Additionally, users can view, edit, add, and delete memory objects
to directly control how the agent "sees" the conversation. | 2308.01542#7 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | The recent advent of large language models (LLM) has resulted in
high-performing conversational agents such as chatGPT. These agents must
remember key information from an ongoing conversation to provide responses that
are contextually relevant to the user. However, these agents have limited
memory and can be distracted by irrelevant parts of the conversation. While
many strategies exist to manage conversational memory, users currently lack
affordances for viewing and controlling what the agent remembers, resulting in
a poor mental model and conversational breakdowns. In this paper, we present
Memory Sandbox, an interactive system and design probe that allows users to
manage the conversational memory of LLM-powered agents. By treating memories as
data objects that can be viewed, manipulated, recorded, summarized, and shared
across conversations, Memory Sandbox provides interaction affordances for users
to manage how the agent should `see' the conversation. | http://arxiv.org/pdf/2308.01542 | Ziheng Huang, Sebastian Gutierrez, Hemanth Kamana, Stephen MacNeil | cs.HC | null | null | cs.HC | 20230803 | 20230803 | [
{
"id": "2210.08750"
},
{
"id": "2107.07567"
},
{
"id": "2304.03442"
},
{
"id": "2307.03172"
},
{
"id": "2204.08128"
},
{
"id": "2307.01142"
}
] |
2308.01552 | 7 | GPT for Robotics Moreover, recent publications, including [24], [25], and [26], have successfully incorporated models such as ChatGPT and GPT-3.5 into the realm of robotics applications. These advancements facilitate interaction between the models and the environment or users, allowing for the correction of the robot's behavior. These papers showcase various prompts and outline a pipeline for the implementation of ChatGPT in robotics tasks. Additionally, they conduct experimental evaluations to assess ChatGPT's capability to execute a wide range of robotics tasks while striving to bridge the gap between natural language and actionable robot actions. The process of reasoning in robotics involves breaking down complex tasks into simpler subtasks that can be more easily solved by the LLM itself or with the aid of tools. Various approaches [27], [28] have been introduced to enable natural language agents to select their next action in text-based environments. | 2308.01552#7 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01542 | 8 | Additionally, users can view, edit, add, and delete memory objects
to directly control how the agent "sees" the conversation.
2.2 Toggle memory object visibility As a conversation grows, LLMs must increasingly rely on their memory management strategy to infer meaning from the conversation. However, in longer conversations, it is unclear what parts of the conversation are stored in memory or are attended to by the model [11]. This results in a poor mental model for users and a lack of control over what context is maintained and used by the agent. Memory Sandbox enables users to selectively hide or show memory objects to control what context is shared with the agent. When the user's intent changes or the conversational context switches, the user can toggle the visibility of memory objects to hide or show parts of the conversation. As a signifier, hidden memory objects are grayed out within the interface.
supplement extra context. Additionally, the arrangement of context is shown to have a significant effect on how well LLMs are able to leverage relevant context [11]. In Memory Sandbox, all the memory objects are draggable, allowing users to experiment and refine the ordering and placement of memory objects in a conversation. | 2308.01542#8 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | The recent advent of large language models (LLM) has resulted in
high-performing conversational agents such as chatGPT. These agents must
remember key information from an ongoing conversation to provide responses that
are contextually relevant to the user. However, these agents have limited
memory and can be distracted by irrelevant parts of the conversation. While
many strategies exist to manage conversational memory, users currently lack
affordances for viewing and controlling what the agent remembers, resulting in
a poor mental model and conversational breakdowns. In this paper, we present
Memory Sandbox, an interactive system and design probe that allows users to
manage the conversational memory of LLM-powered agents. By treating memories as
data objects that can be viewed, manipulated, recorded, summarized, and shared
across conversations, Memory Sandbox provides interaction affordances for users
to manage how the agent should `see' the conversation. | http://arxiv.org/pdf/2308.01542 | Ziheng Huang, Sebastian Gutierrez, Hemanth Kamana, Stephen MacNeil | cs.HC | null | null | cs.HC | 20230803 | 20230803 | [
{
"id": "2210.08750"
},
{
"id": "2107.07567"
},
{
"id": "2304.03442"
},
{
"id": "2307.03172"
},
{
"id": "2204.08128"
},
{
"id": "2307.01142"
}
] |
2308.01552 | 8 | One prominent approach is Chain-of-thought (CoT) reasoning, as proposed in [29]. This approach leverages emergent properties, such as reasoning and commonsense, to solve tasks through multiple steps. It enables the LLM to reason through a series of intermediate actions, leading to the desired outcome. Another approach called faithful reasoning, introduced in [30], decomposes multi-step reasoning into three distinct steps, each handled by a dedicated LLM. By dividing the task into these steps, faithful reasoning facilitates the LLM's ability to tackle complex computations effectively. A similar approach is Scratchpad [31], which involves fine-tuning an LLM on intermediate computation steps, resulting in improved performance on multi-step computation problems. The Describe, Explain, Plan, and Select (DEPS) approach, introduced in [32], was specifically developed to tackle the unique
challenges of planning in open-ended environments such as Minecraft. This innovative system adeptly manages intricate tasks that demand meticulous, multi-step reasoning, effectively prioritizing sub-goals according to the agent's proximity. Notably, DEPS has exhibited remarkable results in enhancing the success rate of Minecraft tasks by offering insightful explanations for errors encountered during sub-task execution. As a groundbreaking planning agent, DEPS has achieved an unprecedented positive success rate in conquering the formidable ObtainDiamond task, marking a significant milestone in the field. | 2308.01552#8 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
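The related-work record above explains chain-of-thought prompting as asking the model for intermediate reasoning before the final action or answer. The snippet below is a generic illustration of that idea; the prompt text is invented for this example and is not taken from the cited papers.

```python
# Generic illustration of chain-of-thought prompting: ask for intermediate
# reasoning steps before the final answer or action. The prompt text is invented.
def zero_shot_cot(task: str) -> str:
    # The common zero-shot trigger phrase appended to the task description.
    return f"{task}\nLet's think step by step."

FEW_SHOT_COT_EXAMPLE = (
    "Task: put a clean mug on the desk.\n"
    "Reasoning: the mug on the table may be dirty, so first examine it, "
    "wash it in the sink if needed, then carry it to the desk.\n"
    "Next action: take mug 1 from table 1\n"
)

def cot_prompt(task: str) -> str:
    # One worked example followed by the new task, inviting step-by-step reasoning.
    return FEW_SHOT_COT_EXAMPLE + zero_shot_cot(f"Task: {task}")

print(cot_prompt("find two keys and put them in a drawer"))
```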
2308.01542 | 9 | 2.4 Summarize memory objects Reminiscent of how humans attend to key aspects in a conversation [14], abstractive summarization distills a large amount of information to provide essential elements to the agent. Yet, what is considered as "key aspects" can vary for individuals, even in the same conversation [14]. Memory Sandbox enables users to select memory objects that are summarized by the LLM. The resulting memory object represents the previous conversation and can be further refined by the user. The original conversation can be viewed by clicking on the summary.
2.5 Share memory objects across conversations Aligning with the goal of managing memory, Memory Sandbox also provides affordances for sharing memories across conversations. This offers a new way for users to engage with multiple agents outside of a single conversation thread. Unlike in conversations with people, the speaker doesn't need to repeat themselves in each conversation to establish a shared understanding.
2.3 Curate memory objects Discussants develop and refine their understanding as a conversation unfolds [5]. Thus, Memory Sandbox provides controls for users to curate memory objects by editing an existing memory object to refine or update the context, deleting a memory object to remove completely irrelevant context, and adding a new memory object to | 2308.01542#9 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | The recent advent of large language models (LLM) has resulted in
high-performing conversational agents such as chatGPT. These agents must
remember key information from an ongoing conversation to provide responses that
are contextually relevant to the user. However, these agents have limited
memory and can be distracted by irrelevant parts of the conversation. While
many strategies exist to manage conversational memory, users currently lack
affordances for viewing and controlling what the agent remembers, resulting in
a poor mental model and conversational breakdowns. In this paper, we present
Memory Sandbox, an interactive system and design probe that allows users to
manage the conversational memory of LLM-powered agents. By treating memories as
data objects that can be viewed, manipulated, recorded, summarized, and shared
across conversations, Memory Sandbox provides interaction affordances for users
to manage how the agent should `see' the conversation. | http://arxiv.org/pdf/2308.01542 | Ziheng Huang, Sebastian Gutierrez, Hemanth Kamana, Stephen MacNeil | cs.HC | null | null | cs.HC | 20230803 | 20230803 | [
{
"id": "2210.08750"
},
{
"id": "2107.07567"
},
{
"id": "2304.03442"
},
{
"id": "2307.03172"
},
{
"id": "2204.08128"
},
{
"id": "2307.01142"
}
] |
2308.01552 | 9 | A different strategy called DERA [33] presents an alternative approach by structuring a dialogue as a conversation between two agent types: "Researcher" and "Decider." The Researcher agent analyzes information and identifies key components of the problem, while the Decider agent autonomously combines the Researcher's insights and makes judgments on the final output. This approach has demonstrated notable enhancements compared to the baseline performance of GPT-4 [3] in evaluations conducted by human experts and quantitative metrics. Particularly, DERA has showcased significant advancements in safety-critical domains like healthcare.
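As a rough illustration of the Researcher/Decider split (a hedged sketch, not DERA's actual prompts or code), the loop below alternates a Researcher turn that extracts key points with a Decider turn that integrates them into a working answer; `call_llm` is a placeholder for a real model call.

```python
from typing import List

def call_llm(role_prompt: str, context: str) -> str:
    # Placeholder for a chat-model call; returns canned text so the sketch runs.
    return f"[{role_prompt.split(':')[0]} output based on: {context[:40]}...]"

def dera_style_dialogue(problem: str, rounds: int = 2) -> str:
    answer = "No answer yet."
    notes: List[str] = []
    for _ in range(rounds):
        # Researcher: surface key components of the problem and critique the draft.
        research = call_llm(
            "Researcher: identify crucial information and issues",
            f"Problem: {problem}\nCurrent answer: {answer}",
        )
        notes.append(research)
        # Decider: autonomously integrate the Researcher's notes into the answer.
        answer = call_llm(
            "Decider: revise the final answer using the notes",
            f"Problem: {problem}\nNotes: {' | '.join(notes)}",
        )
    return answer

if __name__ == "__main__":
    print(dera_style_dialogue("Draft a safe medication plan summary for a patient."))
```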
Additionally, the studies by [7], [34] have also incorporated reflection actions into the model. These reflection actions allow the model to refine its actions based on feedback received during the execution of tasks. By iteratively adjusting its actions and incorporating self-feedback, the model can improve its decision-making process and adapt to changing conditions.
Our research aims to provide additional evidence supporting the effectiveness of ChatGPT in language-conditioned robotic learning, while simultaneously introducing novel architectures that facilitate reasoning through the coordination of various roles performed by LLMs.
# III. METHOD: INTERACT STRUCTURE | 2308.01552#9 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01542 | 10 | Users can create and start multiple conversations with separate LLM-powered agents in the same 2D canvas. Memory objects can be shared and connected between conversations by dragging the memory object from one conversation to another. When dragging, memories are copied by reference to help the user identify the context source.
3 DISCUSSION Conversing is a collaborative activity where participants develop common ground through summarizing the discussion, repairing breakdowns, and emphasizing or de-emphasizing shared ideas [5]. Yet, existing chatbot interfaces do not provide affordances for understanding how the agent "sees" the conversation. Additionally, users cannot rely on a theory of mind. These aspects result in a poor mental model for users and potential misalignment in understanding where conversational breakdown can occur. | 2308.01542#10 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | The recent advent of large language models (LLM) has resulted in
high-performing conversational agents such as chatGPT. These agents must
remember key information from an ongoing conversation to provide responses that
are contextually relevant to the user. However, these agents have limited
memory and can be distracted by irrelevant parts of the conversation. While
many strategies exist to manage conversational memory, users currently lack
affordances for viewing and controlling what the agent remembers, resulting in
a poor mental model and conversational breakdowns. In this paper, we present
Memory Sandbox, an interactive system and design probe that allows users to
manage the conversational memory of LLM-powered agents. By treating memories as
data objects that can be viewed, manipulated, recorded, summarized, and shared
across conversations, Memory Sandbox provides interaction affordances for users
to manage how the agent should `see' the conversation. | http://arxiv.org/pdf/2308.01542 | Ziheng Huang, Sebastian Gutierrez, Hemanth Kamana, Stephen MacNeil | cs.HC | null | null | cs.HC | 20230803 | 20230803 | [
{
"id": "2210.08750"
},
{
"id": "2107.07567"
},
{
"id": "2304.03442"
},
{
"id": "2307.03172"
},
{
"id": "2204.08128"
},
{
"id": "2307.01142"
}
] |
2308.01552 | 10 | # III. METHOD: INTERACT STRUCTURE
In this section, we use the AlfWorld benchmark to test ChatGPT's reasoning capabilities, examining how it accomplishes household tasks step by step when provided only with a few-shot example. We will use not only ChatGPT but also a similar language model called InstructGPT (text-davinci-002). InstructGPT is particularly adept at tasks demanding succinct responses or benefiting from k-shot examples. In this particular task, unlike the previous demonstration, the model is required to integrate task-oriented actions with verbal reasoning. The model needs to possess the ability to think and reason like a human. When faced with dead ends, the model should be capable of adjusting its planning based on logical reasoning.
A. AlfWorld Dataset
AlfWorld is a suite of text-based environments that challenge an agent to solve multi-step tasks in a variety of interactive environments with the ALFRED [35] benchmark. The ALFRED benchmark focuses on tasks that require an agent to accomplish high-level goals in a simulated household
environment by navigating and interacting through text-based actions. In AlfWorld, there are six types of tasks that challenge the agent's ability to plan, track subgoals, and explore systematically. | 2308.01552#10 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01542 | 11 | Memory Sandbox transforms previously implicitly managed conversational memory behind the interface into interactive memory objects on the interface, exposing full control over the memory model of the agent to end users. By selectively hiding, showing, and curating memory representation, we can give users more control over how the agent should "see" the conversation. In addition to curating memory in a single conversation, Memory Sandbox is also a design probe toward memory manipulation affordances for multi-agent interactions. By displaying multiple agents on the same screen and making memories interactive and draggable, Memory Sandbox allows end users to selectively control the shared or unique memory each agent contains.
Tools are beginning to emerge that focus on how users might interact with LLMs, including mapping UI affordances to an LLM [13], grounding human-AI collaboration in a shared artifact [9], providing templates to facilitate prompt generation [10], and decomposing complex prompts to facilitate debugging [19]. In this paper, we presented Memory Sandbox, an interactive system that probes the design space of interaction techniques for memory management of LLMs. Our future work includes user studies to evaluate the efficacy of these techniques and potential trade-offs for implicit vs. explicit memory management. | 2308.01542#11 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | The recent advent of large language models (LLM) has resulted in
high-performing conversational agents such as chatGPT. These agents must
remember key information from an ongoing conversation to provide responses that
are contextually relevant to the user. However, these agents have limited
memory and can be distracted by irrelevant parts of the conversation. While
many strategies exist to manage conversational memory, users currently lack
affordances for viewing and controlling what the agent remembers, resulting in
a poor mental model and conversational breakdowns. In this paper, we present
Memory Sandbox, an interactive system and design probe that allows users to
manage the conversational memory of LLM-powered agents. By treating memories as
data objects that can be viewed, manipulated, recorded, summarized, and shared
across conversations, Memory Sandbox provides interaction affordances for users
to manage how the agent should `see' the conversation. | http://arxiv.org/pdf/2308.01542 | Ziheng Huang, Sebastian Gutierrez, Hemanth Kamana, Stephen MacNeil | cs.HC | null | null | cs.HC | 20230803 | 20230803 | [
{
"id": "2210.08750"
},
{
"id": "2107.07567"
},
{
"id": "2304.03442"
},
{
"id": "2307.03172"
},
{
"id": "2204.08128"
},
{
"id": "2307.01142"
}
] |
2308.01552 | 11 | For example, a task in AlfWorld could be to "examine a paper under a desklamp." To achieve this goal, the agent needs to navigate to specific locations within the simulated household and interact with objects using text commands. The agent might need to issue commands like "go to coffeetable 1," "take paper 2," and "use desklamp 1" to complete the task. The complexity of the tasks in AlfWorld is intentionally designed to be challenging. Task instances can have more than 50 locations and may require an expert policy more than 50 steps to solve. This complexity encourages the agent to effectively plan its actions, keep track of subgoals, and explore the environment systematically. For example, the agent may need to check all desks one by one to find the desklamp.
One of the challenges presented in AlfWorld is the need to determine likely locations for common household items. For instance, a desklamp is likely to be found on desks, shelves, or dressers. This aspect of the environment provides an opportunity for language models like LLMs to leverage their pretrained commonsense knowledge to make informed decisions about the likely locations of objects. | 2308.01552#11 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01542 | 12 | REFERENCES [1] Sanghwan Bae, Donghyun Kwak, Soyoung Kang, Min Young Lee, Sungdong Kim, Yuin Jeong, Hyeri Kim, Sang-Woo Lee, Woomyoung Park, and Nako Sung. 2022. Keep me updated! memory management in long-term conversations. arXiv preprint arXiv:2210.08750 (2022).
[2] Benjamin B Bederson, James D Hollan, Allison Druin, Jason Stewart, David Rogers, and David Proft. 1996. Local tools: An alternative to tool palettes. In Proceedings of the 9th annual ACM symposium on User interface software and technology. 169–170.
[3] Eric A Bier, Maureen C Stone, Ken Pier, William Buxton, and Tony D DeRose. 1993. Toolglass and magic lenses: the see-through interface. In Proceedings of the 20th annual conference on Computer graphics and interactive techniques. 73–80. [4] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems 33 (2020), 1877–1901. | 2308.01542#12 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | The recent advent of large language models (LLM) has resulted in
high-performing conversational agents such as chatGPT. These agents must
remember key information from an ongoing conversation to provide responses that
are contextually relevant to the user. However, these agents have limited
memory and can be distracted by irrelevant parts of the conversation. While
many strategies exist to manage conversational memory, users currently lack
affordances for viewing and controlling what the agent remembers, resulting in
a poor mental model and conversational breakdowns. In this paper, we present
Memory Sandbox, an interactive system and design probe that allows users to
manage the conversational memory of LLM-powered agents. By treating memories as
data objects that can be viewed, manipulated, recorded, summarized, and shared
across conversations, Memory Sandbox provides interaction affordances for users
to manage how the agent should `see' the conversation. | http://arxiv.org/pdf/2308.01542 | Ziheng Huang, Sebastian Gutierrez, Hemanth Kamana, Stephen MacNeil | cs.HC | null | null | cs.HC | 20230803 | 20230803 | [
{
"id": "2210.08750"
},
{
"id": "2107.07567"
},
{
"id": "2304.03442"
},
{
"id": "2307.03172"
},
{
"id": "2204.08128"
},
{
"id": "2307.01142"
}
] |
2308.01552 | 12 | In each environment of AlfWorld, the agent has the option to select an action from a list of permissible actions, denoted as A_t at time step t. Upon executing an action, the agent receives an observation, O_t, and a reward, R(s_t, a_t), from the environment, which then determines the next state of the agent.
AlfWorld offers a diverse set of six tasks and a total of over 3000 unique environments. These environments test the agent's ability to understand the task at hand, formulate a sequential plan consisting of subtasks, and carry out the necessary actions within the given environment. In our trials, we utilize the ReAct problem-solving strategy [6], which has demonstrated superior performance across a wide array of sequential decision-making tasks. ReAct is a strategy that allows the agent to reason and act by articulating its current thoughts and performing actions based on these thoughts. At each time step, the agent has the option to execute a <think>: thought action to verbalize its internal thought process, or an <action> to induce a response from the environment. The set of possible actions in each state is not explicitly defined, providing the agent with full autonomy in determining its next moves. To prevent syntactic errors, we provide the agent with two domain-specific few-shot trajectories.
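The ReAct-style interleaving of verbal reasoning and environment actions can be pictured with the toy loop below; it is only a sketch under stated assumptions (a scripted trajectory and a stubbed text environment), not the AlfWorld or ReAct code.

```python
class ToyTextEnv:
    """A stubbed AlfWorld-like environment: maps action strings to observations."""
    def __init__(self):
        self.observations = {
            "go to desk 1": "On the desk 1, you see a desklamp 1 and a paper 2.",
            "take paper 2 from desk 1": "You pick up the paper 2 from the desk 1.",
            "use desklamp 1": "You turn on the desklamp 1.",
        }

    def step(self, action: str) -> str:
        return self.observations.get(action, "Nothing happens.")

# A scripted ReAct-style trajectory: "think:" steps verbalize reasoning and only
# receive "OK." back; the other steps are real actions that query the environment.
trajectory = [
    "think: To examine the paper under the desklamp, I first need to find a paper.",
    "go to desk 1",
    "take paper 2 from desk 1",
    "think: Now I have the paper. Next, I need to use the desklamp.",
    "use desklamp 1",
]

env = ToyTextEnv()
for t, step in enumerate(trajectory, 1):
    obs = "OK." if step.startswith("think:") else env.step(step)
    print(f"Act {t}: {step}\nObs {t}: {obs}")
```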
# B. Model architecture | 2308.01552#12 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01542 | 13 | [5] Herbert H Clark and Edward F Schaefer. 1989. Contributing to discourse. Cogni- tive science 13, 2 (1989), 259â294.
[6] Malin Eiband, Hanna Schneider, Mark Bilandzic, Julian Fazekas-Con, Mareike Haug, and Heinrich Hussmann. 2018. Bringing transparency design into practice.
[7]
In 23rd international conference on intelligent user interfaces. 211â223. Jeffrey Heer. 2019. Agency plus automation: Designing artificial intelligence into interactive systems. Proceedings of the National Academy of Sciences 116, 6 (2019), 1844â1850.
[8] Eric Horvitz. 1999. Principles of Mixed-Initiative User Interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Pittsburgh, Pennsylvania, USA) (CHI â99). Association for Computing Machinery, New York, NY, USA, 159â166. https://doi.org/10.1145/302979.303030
[9] Ziheng Huang, Kexin Quan, Joel Chan, and Stephen MacNeil. 2023. CausalMap- per: Challenging designers to think in systems with Causal Maps and Large Language Model. In Proceedings of the 15th Conference on Creativity and Cognition. 325â329. | 2308.01542#13 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | The recent advent of large language models (LLM) has resulted in
high-performing conversational agents such as chatGPT. These agents must
remember key information from an ongoing conversation to provide responses that
are contextually relevant to the user. However, these agents have limited
memory and can be distracted by irrelevant parts of the conversation. While
many strategies exist to manage conversational memory, users currently lack
affordances for viewing and controlling what the agent remembers, resulting in
a poor mental model and conversational breakdowns. In this paper, we present
Memory Sandbox, an interactive system and design probe that allows users to
manage the conversational memory of LLM-powered agents. By treating memories as
data objects that can be viewed, manipulated, recorded, summarized, and shared
across conversations, Memory Sandbox provides interaction affordances for users
to manage how the agent should `see' the conversation. | http://arxiv.org/pdf/2308.01542 | Ziheng Huang, Sebastian Gutierrez, Hemanth Kamana, Stephen MacNeil | cs.HC | null | null | cs.HC | 20230803 | 20230803 | [
{
"id": "2210.08750"
},
{
"id": "2107.07567"
},
{
"id": "2304.03442"
},
{
"id": "2307.03172"
},
{
"id": "2204.08128"
},
{
"id": "2307.01142"
}
] |
2308.01552 | 13 | # B. Model architecture
2308.01552 | 13 | We introduced a novel model called InterAct, which is built upon the foundation of ReAct. The architectural diagram of InterAct can be observed in Figure 1. While ReAct has demonstrated impressive accuracy in diverse decision-making and knowledge-intensive tasks, it occasionally encounters common errors, including Perception Error, Object Misidentification, and Inefficient Planning. In simpler terms,
although ReAct achieves state-of-the-art performance overall, there exists a small subset of tasks that remain unsolved due to minor imperfections in a single model.
Fig. 1. The architecture of both ReAct and InterAct. InterAct involves the integration of LLM with various agents to facilitate smoother interaction with the environment.
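As a rough, runnable illustration of the integration pictured in Fig. 1 (a sketch with stubbed model calls, not the paper's code), the base agent below proposes the next step, a sorter helper is consulted when a search order is needed, and a checker helper verifies observations before they are fed back; the helper names and prompts are illustrative assumptions.

```python
def base_llm(prompt: str) -> str:
    # Stub for the ReAct planner (text-davinci-002 in the paper's setup).
    return "think: I need to find a pan. I should decide where to look first."

def sorter_llm(item: str, locations: list) -> list:
    # Stub for a ChatGPT "sorter"; a real one would rank locations for the item.
    return sorted(locations, key=lambda loc: 0 if ("stove" in loc or "cabinet" in loc) else 1)

def checker_llm(target: str, observation: str) -> bool:
    # Stub for a ChatGPT "checker" that verifies the target really appears.
    return target in observation

def interact_step(task: str, observation: str) -> str:
    thought = base_llm(f"Task: {task}\nObs: {observation}")
    if "where to look" in thought:
        order = sorter_llm("pan", ["drawer 1", "stoveburner 1", "cabinet 2", "sofa 1"])
        return f"search order from sorter: {order}"
    return thought

def interact_feedback(target: str, observation: str) -> str:
    # The checker's verdict is returned to the agent as an extra observation.
    if checker_llm(target, observation):
        return f"Checker: the {target} is confirmed here."
    return f"Checker: no {target} here; this looks like a different object."

if __name__ == "__main__":
    print(interact_step("put a pan on the diningtable", "You are in the kitchen."))
    print(interact_feedback("pan", "On the stoveburner 1, you see a pot 1."))
```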
To address these challenges, InterAct leverages the combined strength of agents with distinct purposes, such as the checker and the sorter, in the places where a single model is susceptible to errors. In addition, we have modified the original basic prompt to enhance InterAct's ability to plan comprehensive search paths when looking for multiple items, ensuring that no possible locations are overlooked. This optimization greatly improves the efficiency of the tasks being performed. | 2308.01552#13 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01542 | 14 | [10] Ellen Jiang, Kristen Olson, Edwin Toh, Alejandra Molina, Aaron Donsbach, Michael Terry, and Carrie J Cai. 2022. Promptmaker: Prompt-based prototyping with large language models. In CHI Conference on Human Factors in Computing Systems Extended Abstracts. 1–8.
[11] Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2023. Lost in the Middle: How Language Models Use Long Contexts. arXiv:2307.03172 [cs.CL]
[12] Zhengyi Ma, Zhicheng Dou, Yutao Zhu, Hanxun Zhong, and Ji-Rong Wen. 2021. One chatbot per person: Creating personalized chatbots based on implicit user profiles. In Proceedings of the 44th international ACM SIGIR conference on research and development in information retrieval. 555–564. | 2308.01542#14 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | The recent advent of large language models (LLM) has resulted in
high-performing conversational agents such as chatGPT. These agents must
remember key information from an ongoing conversation to provide responses that
are contextually relevant to the user. However, these agents have limited
memory and can be distracted by irrelevant parts of the conversation. While
many strategies exist to manage conversational memory, users currently lack
affordances for viewing and controlling what the agent remembers, resulting in
a poor mental model and conversational breakdowns. In this paper, we present
Memory Sandbox, an interactive system and design probe that allows users to
manage the conversational memory of LLM-powered agents. By treating memories as
data objects that can be viewed, manipulated, recorded, summarized, and shared
across conversations, Memory Sandbox provides interaction affordances for users
to manage how the agent should `see' the conversation. | http://arxiv.org/pdf/2308.01542 | Ziheng Huang, Sebastian Gutierrez, Hemanth Kamana, Stephen MacNeil | cs.HC | null | null | cs.HC | 20230803 | 20230803 | [
{
"id": "2210.08750"
},
{
"id": "2107.07567"
},
{
"id": "2304.03442"
},
{
"id": "2307.03172"
},
{
"id": "2204.08128"
},
{
"id": "2307.01142"
}
] |
2308.01552 | 14 | Sorter When processing environmental data, ReAct initially needs to determine the likelihood of objects appearing in specific locations. However, this ranking process often falls short, leading to less efficient planning. This inefficiency may arise from the fact that the InstructGPT model (text-davinci-002) is not sufficiently trained in factual knowledge and common-sense reasoning. On the other hand, ChatGPT has been fine-tuned using Reinforcement Learning with Human Feedback (RLHF) and has demonstrated a more nuanced understanding of various situations. It excels at making well-informed decisions, as depicted in Figure 2. To improve the efficiency of predicting object locations, we integrate ChatGPT as a decision-making component. Whenever ReAct requires this procedure, it can autonomously utilize ChatGPT, thus enhancing the effectiveness of its object search operations. | 2308.01552#14 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01542 | 15 | [13] Stephen MacNeil, Andrew Tran, Joanne Kim, Ziheng Huang, Seth Bernstein, and Dan Mogil. 2023. Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances. arXiv preprint arXiv:2307.01142 (2023).
[14] John C McCarthy, Victoria C Miles, and Andrew F Monk. 1991. An experimental study of common ground in text-based communication. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 209–215.
[15] Joon Sung Park, Joseph C O'Brien, Carrie J Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. 2023. Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442 (2023).
[16] Andrew Ross, Nina Chen, Elisa Zhao Hang, Elena L Glassman, and Finale Doshi-Velez. 2021. Evaluating the interpretability of generative models by interactive reconstruction. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–15. | 2308.01542#15 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | The recent advent of large language models (LLM) has resulted in
high-performing conversational agents such as chatGPT. These agents must
remember key information from an ongoing conversation to provide responses that
are contextually relevant to the user. However, these agents have limited
memory and can be distracted by irrelevant parts of the conversation. While
many strategies exist to manage conversational memory, users currently lack
affordances for viewing and controlling what the agent remembers, resulting in
a poor mental model and conversational breakdowns. In this paper, we present
Memory Sandbox, an interactive system and design probe that allows users to
manage the conversational memory of LLM-powered agents. By treating memories as
data objects that can be viewed, manipulated, recorded, summarized, and shared
across conversations, Memory Sandbox provides interaction affordances for users
to manage how the agent should `see' the conversation. | http://arxiv.org/pdf/2308.01542 | Ziheng Huang, Sebastian Gutierrez, Hemanth Kamana, Stephen MacNeil | cs.HC | null | null | cs.HC | 20230803 | 20230803 | [
{
"id": "2210.08750"
},
{
"id": "2107.07567"
},
{
"id": "2304.03442"
},
{
"id": "2307.03172"
},
{
"id": "2204.08128"
},
{
"id": "2307.01142"
}
] |
2308.01552 | 15 | [Figure 2 content: two example AlfWorld search traces for finding a cloth, one with candidate locations ranked by text-davinci-002 and one ranked by ChatGPT; see the Fig. 2 caption.] | 2308.01552#15 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01542 | 16 | [17] Heleen Rutjes, Martijn Willemsen, and Wijnand IJsselsteijn. 2019. Considerations on explainable AI and users' mental models. In CHI 2019 Workshop: Where is the Human? Bridging the Gap Between AI and HCI. Association for Computing Machinery, Inc.
[18] Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed H Chi, Nathanael Schärli, and Denny Zhou. 2023. Large language models can be easily distracted by irrelevant context. In International Conference on Machine Learning. PMLR, 31210–31227. | 2308.01542#16 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | The recent advent of large language models (LLM) has resulted in
high-performing conversational agents such as chatGPT. These agents must
remember key information from an ongoing conversation to provide responses that
are contextually relevant to the user. However, these agents have limited
memory and can be distracted by irrelevant parts of the conversation. While
many strategies exist to manage conversational memory, users currently lack
affordances for viewing and controlling what the agent remembers, resulting in
a poor mental model and conversational breakdowns. In this paper, we present
Memory Sandbox, an interactive system and design probe that allows users to
manage the conversational memory of LLM-powered agents. By treating memories as
data objects that can be viewed, manipulated, recorded, summarized, and shared
across conversations, Memory Sandbox provides interaction affordances for users
to manage how the agent should `see' the conversation. | http://arxiv.org/pdf/2308.01542 | Ziheng Huang, Sebastian Gutierrez, Hemanth Kamana, Stephen MacNeil | cs.HC | null | null | cs.HC | 20230803 | 20230803 | [
{
"id": "2210.08750"
},
{
"id": "2107.07567"
},
{
"id": "2304.03442"
},
{
"id": "2307.03172"
},
{
"id": "2204.08128"
},
{
"id": "2307.01142"
}
] |
2308.01552 | 16 | Fig. 2. The left image was generated using text-davinci-002 for search ranking, while the right image was generated using ChatGPT. It can be observed that ChatGPT exhibits higher logical reasoning in finding objects compared to text-davinci-002.
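For concreteness, a sorter query of the kind contrasted in Fig. 2 might be phrased as below; the exact wording is an assumption on our part, not the prompt used in the paper.

```python
def build_sorter_prompt(item: str, receptacles: list) -> str:
    # Ask the model to rank receptacles by how likely they are to hold the item.
    listing = ", ".join(receptacles)
    return (
        f"You are helping a household agent find a {item}.\n"
        f"Available receptacles: {listing}.\n"
        f"Sort them from most to least likely to contain a {item}, "
        f"and answer with a comma-separated list only."
    )

print(build_sorter_prompt("cloth", ["garbagecan 1", "cabinet 1", "towelholder 1", "sinkbasin 2"]))
```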
Checker Another issue with text-davinci-002 is that it tends to mistakenly categorize similar objects as the same. For example, it might treat a pan and a pot as identical items, leading to the problem of Object Misidentification, as depicted
in Figure 3. To address this issue, we employ ChatGPT as a checker by providing it with appropriate prompts. We have observed that ChatGPT can successfully distinguish between similar objects. Furthermore, we utilize the results from this checker as observations and feed them back to the LLM, as illustrated in Figure 1. This approach helps us resolve the problem related to object misidentification.
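One way to phrase such a checker query, and to convert its verdict into an extra observation for the planner, is sketched below; `ask_chatgpt` is a stand-in for a real chat-model call, and the prompt wording is our assumption rather than the paper's.

```python
def ask_chatgpt(prompt: str) -> str:
    # Stand-in for a real chat-model call; always answers "No" in this sketch.
    return "No"

def check_object(target: str, seen_object: str) -> str:
    prompt = (
        f"Are a '{seen_object}' and a '{target}' the same kind of object? "
        f"Answer Yes or No."
    )
    verdict = ask_chatgpt(prompt)
    if verdict.strip().lower().startswith("yes"):
        return f"Obs (checker): the {seen_object} can serve as the {target}."
    return f"Obs (checker): the {seen_object} is not a {target}; keep searching."

print(check_object("pan", "pot 1"))
```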
Fig. 3. Object Misidentification. In this scenario, the objective is to locate a pan; however, ReAct mistakenly misidentifies another object as the pan. | 2308.01552#16 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01542 | 17 | [19] Tongshuang Wu, Michael Terry, and Carrie Jun Cai. 2022. Ai chains: Transparent and controllable human-ai interaction by chaining large language model prompts. In Proceedings of the 2022 CHI conference on human factors in computing systems. [20] Haijun Xia, Bruno Araujo, Tovi Grossman, and Daniel Wigdor. 2016. Object-oriented drawing. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. 4610–4621. [21] Jing Xu, Arthur Szlam, and Jason Weston. 2021. Beyond goldfish memory: Long-term open-domain conversation. arXiv preprint arXiv:2107.07567 (2021). [22] Xinchao Xu, Zhibin Gou, Wenquan Wu, Zheng-Yu Niu, Hua Wu, Haifeng Wang, and Shihang Wang. 2022. Long Time No See! Open-Domain Conversation with Long-Term Persona Memory. In Findings of the Association for Computational Linguistics: ACL 2022. Association for Computational Linguistics, Dublin, Ireland, 2639–2650. https://doi.org/10.18653/v1/2022.findings-acl.207 | 2308.01542#17 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | The recent advent of large language models (LLM) has resulted in
high-performing conversational agents such as chatGPT. These agents must
remember key information from an ongoing conversation to provide responses that
are contextually relevant to the user. However, these agents have limited
memory and can be distracted by irrelevant parts of the conversation. While
many strategies exist to manage conversational memory, users currently lack
affordances for viewing and controlling what the agent remembers, resulting in
a poor mental model and conversational breakdowns. In this paper, we present
Memory Sandbox, an interactive system and design probe that allows users to
manage the conversational memory of LLM-powered agents. By treating memories as
data objects that can be viewed, manipulated, recorded, summarized, and shared
across conversations, Memory Sandbox provides interaction affordances for users
to manage how the agent should `see' the conversation. | http://arxiv.org/pdf/2308.01542 | Ziheng Huang, Sebastian Gutierrez, Hemanth Kamana, Stephen MacNeil | cs.HC | null | null | cs.HC | 20230803 | 20230803 | [
{
"id": "2210.08750"
},
{
"id": "2107.07567"
},
{
"id": "2304.03442"
},
{
"id": "2307.03172"
},
{
"id": "2204.08128"
},
{
"id": "2307.01142"
}
] |
2308.01552 | 17 | Fig. 3. Object Misidentification. In this scenario, the objective is to locate a pan; however, ReAct mistakenly misidentifies another object as the pan.
In the AlfWorld environment, we encountered a mission type named "pick 2", where the agent is required to find two identical objects. We observed that ReAct alone tends to forget its previous locations, resulting in inefficient trajectories characterized by frequent revisits to the same place. In some instances, this led to hallucinations, defined as consecutive identical actions with the environment responding similarly. To address this issue, we made changes to the original model's prompt. After finding the first object and placing it in the corresponding receptacle, we allow the model to autonomously generate a trajectory while ensuring that this path does not overlook areas where the second object might be present, as shown in Figure 4.
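A few-shot prompt line of the kind added for "pick 2" tasks could remind the agent to revisit the first object's location before widening the search; the exact instruction below is an illustrative assumption, not the paper's prompt.

```python
def pick_two_hint(first_found_at: str, remaining: list) -> str:
    # Encourage the agent to re-check where the first object was found, then
    # continue with the locations it has not visited yet, so none are overlooked.
    rest = ", ".join(remaining)
    return (
        f"think: Now I put the first object in place. Next, I should go back to "
        f"{first_found_at} to see if there is another one. If not, I will check "
        f"{rest} one by one."
    )

print(pick_two_hint("armchair 1", ["drawer 1", "sidetable 1", "cabinet 1", "shelf 1"]))
```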
For more details about the prompts, we refer the reader to Appendix A.
# IV. EVALUATION | 2308.01552#17 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01542 | 18 | [23] Yunfeng Zhang, Q Vera Liao, and Rachel KE Bellamy. 2020. Effect of confidence and explanation on accuracy and trust calibration in AI-assisted decision making. In Proceedings of the 2020 conference on fairness, accountability, and transparency. 295–305.
[24] Hanxun Zhong, Zhicheng Dou, Yutao Zhu, Hongjin Qian, and Ji-Rong Wen. 2022. Less is more: Learning to refine dialogue history for personalized dialogue generation. arXiv preprint arXiv:2204.08128 (2022). | 2308.01542#18 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | The recent advent of large language models (LLM) has resulted in
high-performing conversational agents such as chatGPT. These agents must
remember key information from an ongoing conversation to provide responses that
are contextually relevant to the user. However, these agents have limited
memory and can be distracted by irrelevant parts of the conversation. While
many strategies exist to manage conversational memory, users currently lack
affordances for viewing and controlling what the agent remembers, resulting in
a poor mental model and conversational breakdowns. In this paper, we present
Memory Sandbox, an interactive system and design probe that allows users to
manage the conversational memory of LLM-powered agents. By treating memories as
data objects that can be viewed, manipulated, recorded, summarized, and shared
across conversations, Memory Sandbox provides interaction affordances for users
to manage how the agent should `see' the conversation. | http://arxiv.org/pdf/2308.01542 | Ziheng Huang, Sebastian Gutierrez, Hemanth Kamana, Stephen MacNeil | cs.HC | null | null | cs.HC | 20230803 | 20230803 | [
{
"id": "2210.08750"
},
{
"id": "2107.07567"
},
{
"id": "2304.03442"
},
{
"id": "2307.03172"
},
{
"id": "2204.08128"
},
{
"id": "2307.01142"
}
] |
2308.01552 | 18 | For more details about the prompts, we refer the reader to Appendix A.
# IV. EVALUATION
In this section, we present a comparative analysis of the performance enhancement provided by the helpers (sorter or checker) and the new trajectory planning when compared to the baseline model. Our findings demonstrate that InterAct consistently outperforms ReAct on AlfWorld (as shown in Table I) across all tasks. On AlfWorld, the top-performing InterAct trial achieves an impressive average success rate of 98%, falling short in only 2 out of 134 tasks. This performance is significantly better than the best trials of ReAct (73%) and BUTLER (37%). Indeed, InterAct has demonstrated exceptional proficiency in handling these tasks, as evidenced by achieving a 100% success rate in four out of the six tasks. This performance showcases InterAct's remarkable ability to effectively manage and succeed in various tasks. Notably, even when ReAct is augmented only with a checker or sorter, the overall average performance surpasses that of ReAct without helpers by a significant margin. The tasks that show the most substantial improvement are "pick2" and "clean," with an approximate gain of 47% and 41%. From a qualitative standpoint, we observed that ReAct, without any helper, faces difficulties in accurately determining the presence of items in a specific location or employing ineffective search strategies. | 2308.01552#18 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01552 | 19 | [Figure 4 content: an example trace for the task "find two pillow and put them in sofa"; see the Fig. 4 caption for the comparison between the initial and revised planning.] | 2308.01552#19 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01552 | 20 | [Figure 4 content, continued: further steps of the "find two pillow and put them in sofa" traces; see the Fig. 4 caption.] | 2308.01552#20 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01552 | 21 | [Figure 4 content, continued: final steps of the "find two pillow and put them in sofa" traces, ending with the second pillow placed on the sofa; see the Fig. 4 caption.] | 2308.01552#21 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01552 | 22 | Fig. 4. Trajectory planning. In the initial scenario, the agent fails to retrieve the second pillow from the armchair after placing the first pillow on the sofa. Consequently, the agent cannot find the second pillow, resulting in an incomplete task. In the revised scenario, InterAct addresses this issue by considering the future search trajectory. It prioritizes returning to the armchair to search for the second pillow before exploring the other areas. This approach improves the chances of successfully locating the second pillow and completing the task.
TABLE I ALFWORLD TASK-SPECIFIC SUCCESS RATES (%).
Method          Pick  Clean  Heat  Cool  Look  Pick2  All
BUTLERg           33      6    70    76    17     12   46
BUTLER            65     39    83    76    55     24   57
Act               88     41    76    67    73     43   46
ReAct             88     55    90    81    75     53   73
ReAct+checker     85     81   100    87    92     75   86
ReAct+sorter      84     76    88    73    80     67   78
InterAct         100     96   100    94   100    100   98
# V. DISCUSSION AND LIMITATIONS
A. Scalability of InterAct
Our InterAct model is scalable and adaptable to different datasets and scenarios. For instance, if there's a need for a feature similar to "memories," we can develop an interpreter to describe the current path, among other things, without having to train numerous different language models. This is possible because ChatGPT serves as an excellent backbone for such extensions. | 2308.01552#22 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01552 | 23 | B. Error assessment with a supervisor module
Despite achieving an impressive average performance of 98% on the AlfWorld dataset, our analysis of failed trajectories uncovered certain limitations. One notable drawback is the model's heavy reliance on prompt completeness within InterAct. When our examples contain missing or unaddressed components, the model fails to detect these errors, resulting in repetitive actions, even for trivial mistakes. To overcome this issue, we explored the possibility of using an alternative ChatGPT model as a supervisor to identify such errors. However, it's important to acknowledge that the accuracy of the supervisor's judgment cannot be guaranteed, and there may be occasional misidentifications leading to "action errors."
To tackle the challenge of error detection, we compared ChatGPT with GPT-4. The results demonstrated a significant improvement in error-detection performance with GPT-4. Unfortunately, GPT-4 is currently unavailable as an open-source model and cannot be accessed free of charge, so conducting extensive simulations with GPT-4 requires funding support.
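A minimal sketch of how such a supervisor check could be wired in, assuming a generic chat-completion client; the function name, heuristic, and judgment prompt below are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only (not the paper's code): a second chat model reviews
# the agent's recent steps and flags repetitive or ignored-error behaviour.
def supervisor_flags_error(chat_client, recent_steps, max_repeats=3):
    """Return True if the supervisor judges the trajectory to be stuck."""
    # Cheap heuristic first: the same action issued several times in a row.
    actions = [s for s in recent_steps if s.startswith(">")]
    if len(actions) >= max_repeats and len(set(actions[-max_repeats:])) == 1:
        return True

    # Otherwise ask the supervisor model for a yes/no judgment.
    prompt = (
        "You supervise a household agent. Given its recent steps, answer 'yes' "
        "if it is repeating itself or ignoring an error, otherwise 'no'.\n\n"
        + "\n".join(recent_steps)
    )
    reply = chat_client(prompt)  # hypothetical thin wrapper around a chat API
    return reply.strip().lower().startswith("yes")
```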
C. Insufficiency of the dataset | 2308.01552#23 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01552 | 24 | C. Insufficiency of the dataset
While AlfWorld is a valuable platform for assessing AI performance, it has certain limitations. Primarily, it encompasses only six types of tasks, and even within these categories, the task quantity is quite limited. These restrictions neither fully test nor make optimal use of the AI systems' capabilities. If we move to an environment offering a larger range and diversity of tasks, as well as a broader and more varied set of locations, our model will still need improvement to maintain its current level of accuracy. This aspect will be our focus for future research.
# VI. CONCLUSION
Our research is centered on enhancing the task planning capabilities of large language models. We developed a new model, InterAct, built upon the framework of the ReAct model. InterAct is a culmination of various "helpers" (like checkers and sorters) and aims to improve upon the existing trajectory. We evaluated this framework in the AlfWorld simulated environment, where it showed a substantial increase in decision-making accuracy, soaring from 75% to an impressive 98%. This highlights the vast potential of these models in AI-driven systems and technologies.
In essence, this study underscores the revolutionary potential of AI language models like ChatGPT and their pivotal | 2308.01552#24 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01552 | 25 | In essence, this study underscores the revolutionary potential of AI language models like ChatGPT and their pivotal
role in shaping future real-world interactions. As we continue to delve into their capabilities, we are on the cusp of a new technological era marked by not only intelligence but also intuitiveness and responsiveness to human needs.
# REFERENCES
[1] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," Advances in Neural Information Processing Systems, vol. 30, 2017.
[2] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, "Exploring the limits of transfer learning with a unified text-to-text transformer," The Journal of Machine Learning Research, vol. 21, no. 1, pp. 5485–5551, 2020.
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
[3] OpenAI, "GPT-4 technical report," arXiv, 2023.
[4] OpenAI, ChatGPT. https://chat.openai.com, 2023.
[5] M. Shridhar, X. Yuan, M.-A. Côté, Y. Bisk, A. Trischler, and M. Hausknecht, "ALFWorld: Aligning text and embodied environments for interactive learning," arXiv preprint arXiv:2010.03768, 2020.
[6] S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. Narasimhan, and Y. Cao, "ReAct: Synergizing reasoning and acting in language models," arXiv preprint arXiv:2210.03629, 2022.
[7] N. Shinn, B. Labash, and A. Gopinath, "Reflexion: an autonomous agent with dynamic memory and self-reflection," arXiv preprint arXiv:2303.11366, 2023.
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
[8] W. Fedus, B. Zoph, and N. Shazeer, "Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity," The Journal of Machine Learning Research, vol. 23, no. 1, pp. 5232–5270, 2022.
[9] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., "Language models are few-shot learners," Advances in Neural Information Processing Systems, vol. 33, pp. 1877–1901, 2020.
[10] L. Zhuang, L. Wayne, S. Ya, and Z. Jun, "A robustly optimized BERT pre-training approach with post-training," in Proceedings of the 20th Chinese National Conference on Computational Linguistics, 2021, pp. 1218–1227.
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
[11] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo, "Swin transformer: Hierarchical vision transformer using shifted windows," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 10012–10022.
[12] J. Liang, J. Cao, G. Sun, K. Zhang, L. Van Gool, and R. Timofte, "SwinIR: Image restoration using Swin transformer," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 1833–1844.
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
[13] L. Chen, K. Lu, A. Rajeswaran, K. Lee, A. Grover, M. Laskin, P. Abbeel, A. Srinivas, and I. Mordatch, "Decision transformer: Reinforcement learning via sequence modeling," Advances in Neural Information Processing Systems, vol. 34, pp. 15084–15097, 2021.
[14] K.-H. Lee, O. Nachum, M. S. Yang, L. Lee, D. Freeman, S. Guadarrama, I. Fischer, W. Xu, E. Jang, H. Michalewski et al., "Multi-game decision transformers," Advances in Neural Information Processing Systems, vol. 35, pp. 27921–27936, 2022.
[15] K. Alexis, C. Papachristos, R. Siegwart, and A. Tzes, "Uniform coverage structural inspection path-planning for micro aerial vehicles," in 2015 IEEE International Symposium on Intelligent Control (ISIC). IEEE, 2015, pp. 59–64.
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
[16] D. S. Chaplot, D. Pathak, and J. Malik, "Differentiable spatial planning using transformers," in International Conference on Machine Learning. PMLR, 2021, pp. 1484–1495.
[17] K. He, X. Chen, S. Xie, Y. Li, P. Dollár, and R. Girshick, "Masked autoencoders are scalable vision learners," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 16000–16009.
[18] D. Park and S. Y. Chun, "Classification based grasp detection using spatial transformer network," arXiv preprint arXiv:1803.01356, 2018.
[19] A. Brohan, N. Brown, J. Carbajal, Y. Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog, J. Hsu et al., "RT-1: Robotics transformer for real-world control at scale," arXiv preprint arXiv:2212.06817, 2022.
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
[20] M. Ahn, A. Brohan, N. Brown, Y. Chebotar, O. Cortes, B. David, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog et al., "Do as I can, not as I say: Grounding language in robotic affordances," arXiv preprint arXiv:2204.01691, 2022.
[21] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann et al., "PaLM: Scaling language modeling with pathways," arXiv preprint arXiv:2204.02311, 2022.
[22] W. Huang, F. Xia, T. Xiao, H. Chan, J. Liang, P. Florence, A. Zeng, J. Tompson, I. Mordatch, Y. Chebotar et al., "Inner monologue: Embodied reasoning through planning with language models," arXiv preprint arXiv:2207.05608, 2022.
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
[23] W. Huang, P. Abbeel, D. Pathak, and I. Mordatch, "Language models as zero-shot planners: Extracting actionable knowledge for embodied agents," in International Conference on Machine Learning. PMLR, 2022, pp. 9118–9147.
[24] S. Vemprala, R. Bonatti, A. Bucker, and A. Kapoor, "ChatGPT for robotics: Design principles and model abilities," 2023.
[25] N. Wake, A. Kanehira, K. Sasabuchi, J. Takamatsu, and K. Ikeuchi, "ChatGPT Empowered Long-Step Robot Control in Various Environments: A Case Application," arXiv preprint arXiv:2304.03893, 2023.
[26] G. Lu, S. Li, G. Mai, J. Sun, D. Zhu, L. Chai, H. Sun, X. Wang, H. Dai, N. Liu et al., "AGI for agriculture," arXiv preprint arXiv:2304.06136, 2023.
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
[27] H. W. Chung, L. Hou, S. Longpre, B. Zoph, Y. Tay, W. Fedus, E. Li, X. Wang, M. Dehghani, S. Brahma et al., "Scaling instruction-finetuned language models," arXiv preprint arXiv:2210.11416, 2022.
[28] J. Wei, Y. Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, D. Yogatama, M. Bosma, D. Zhou, D. Metzler et al., "Emergent abilities of large language models," arXiv preprint arXiv:2206.07682, 2022.
[29] J. Wei, X. Wang, D. Schuurmans, M. Bosma, E. Chi, Q. Le, and D. Zhou, "Chain of thought prompting elicits reasoning in large language models," arXiv preprint arXiv:2201.11903, 2022.
[30] A. Creswell and M. Shanahan, "Faithful reasoning using large language models," arXiv preprint arXiv:2208.14271, 2022.
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
[31] M. Nye, A. J. Andreassen, G. Gur-Ari, H. Michalewski, J. Austin, D. Bieber, D. Dohan, A. Lewkowycz, M. Bosma, D. Luan et al., "Show your work: Scratchpads for intermediate computation with language models," arXiv preprint arXiv:2112.00114, 2021.
[32] Z. Wang, S. Cai, A. Liu, X. Ma, and Y. Liang, "Describe, explain, plan and select: Interactive planning with large language models enables open-world multi-task agents," arXiv preprint arXiv:2302.01560, 2023.
[33] V. Nair, E. Schumacher, G. Tso, and A. Kannan, "DERA: Enhancing large language model completions with dialog-enabled resolving agents," arXiv preprint arXiv:2303.17071, 2023.
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
[34] A. Madaan, N. Tandon, P. Gupta, S. Hallinan, L. Gao, S. Wiegreffe, U. Alon, N. Dziri, S. Prabhumoye, Y. Yang et al., "Self-refine: Iterative refinement with self-feedback," arXiv preprint arXiv:2303.17651, 2023.
[35] M. Shridhar, J. Thomason, D. Gordon, Y. Bisk, W. Han, R. Mottaghi, L. Zettlemoyer, and D. Fox, "ALFRED: A benchmark for interpreting grounded instructions for everyday tasks," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 10740–10749.
# APPENDIX A ALFWORLD EXPERIMENT DETAILS | 2308.01552#35 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01552 | 36 | # APPENDIX A ALFWORLD EXPERIMENT DETAILS
Here's an example of the InterAct prompt for the heat task. In the prompt, we enclose the items that need to be found within double square brackets so that they can be fed to the checker and sorter later. InterAct then searches for the items in the order given by the sorter. Each time we arrive at a location, we ask the checker whether the required item is there. If it is found, we treat the things present at that location as observations and return them to the LLM; if it is not found, the checker returns the message "Object is not here."
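As a rough sketch of that plumbing, the snippet below shows one way the double-bracketed target, the sorter's location ordering, and the checker's per-location verdict could fit together; `env`, `ask_sorter`, and `ask_checker` are hypothetical stand-ins for the simulator and the separately prompted ChatGPT roles, not InterAct's released code.

```python
import re

def extract_target(thought):
    """Pull the [[item]] that the planner marked as the current search target."""
    match = re.search(r"\[\[(.+?)\]\]", thought)
    return match.group(1) if match else None

def search_for(target, env, ask_sorter, ask_checker):
    """Visit candidate locations in the sorter's order until the checker finds the target."""
    for location in ask_sorter(target):           # e.g. ["fridge 1", "countertop 1", ...]
        observation = env.step(f"go to {location}")
        if ask_checker(target, observation):       # does this observation contain the target?
            return location, observation           # real observation goes back to the LLM
        # otherwise the LLM only sees the short message used in the prompt
        observation = f"{target} is not here."
    return None, f"{target} was not found."
```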
# <heat> Prompt
You are in the middle of a room. Looking quickly around you, you see a cabinet 10, a cabinet 9, a cabinet 8, a cabinet 7, a cabinet 6, a cabinet 5, a cabinet 4, a cabinet 3, a cabinet 2, a cabinet 1, a coffeemachine 1, a countertop 3, a countertop 2, a countertop 1, a diningtable 1, a drawer 6, a drawer 5, a | 2308.01552#36 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01552 | 37 | drawer 4, a drawer 3, a drawer 2, a drawer 1, a fridge 1, a garbagecan 1, a microwave 1, a sinkbasin 1, a stoveburner 4, a stoveburner 3, a stoveburner 2, a stoveburner 1, and a toaster 1. Your task is to: heat some egg and put it in diningtable. > think: To solve the task, I need to find and take an egg, then heat it with microwave, then put it in diningtable. OK. > think: First I need to find an [[egg]]. OK. > sorting(egg) [a fridge 1, a countertop 1, a countertop 2, a countertop 3, a diningtable 1, a sinkbasin 1, a garbagecan 1, a microwave 1, a cabinet 1, a cabinet 2, a cabinet 3, a cabinet 4, a cabinet 5, a cabinet 6, a cabinet 7, a cabinet 8, a cabinet 9, a cabinet 10, a stoveburner 1, a stoveburner 2, a stoveburner 3, a stoveburner 4, a drawer 1, a drawer 2, a drawer 3, | 2308.01552#37 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01552 | 38 | 1, a stoveburner 2, a stoveburner 3, a stoveburner 4, a drawer 1, a drawer 2, a drawer 3, a drawer 4, a drawer 5, a drawer 6, a coffeemachine 1, a toaster 1] > think: Based on the sorting provided, I can check fridge (1), countertop (1-3), diningtable (1), sinkbasin (1), garbagecan (1), microwave (1), cabinet (1-10), stoveburner (1-4), drawer (1-6), coffeemachine (1), toaster (1) one by one to find egg, starting with fridge 1. OK. > go to fridge 1 The fridge 1 is closed. > open fridge 1 egg is not here. > go to countertop 1 egg is not here. > go to countertop 2 egg is not here. > go to countertop 3 On the countertop 3, you see a bowl 1, a butterknife 1, a egg 2, a kettle 2, a plate 1, a sink 1, and a spatula 2. > think: Now I find an egg (2). Next, I need to take it. OK. > take egg 2 from countertop 3 You pick up the egg 2 from the countertop | 2308.01552#38 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01552 | 39 | > think: Now I find an egg (2). Next, I need to take it. OK. > take egg 2 from countertop 3 You pick up the egg 2 from the countertop 3. > think: Now I take an egg (2). Next, I need go to a microwave (1) and heat it. OK. > go to microwave 1 The microwave 1 is closed. > heat egg 2 with microwave 1 You heat the egg 2 using the microwave 1. > think: Now I heat an egg (2). Next, I need to put it in/on diningtable 1. OK. > go to diningtable 1 On the diningtable 1, you see a apple 2, a bread 3, a egg 1, a kettle 1, a knife 1, a mug 1, a papertowelroll 1, a peppershaker 2, a potato 1, a soapbottle 1, and a spatula 1. > put egg 2 in/on diningtable 1 You put the egg 2 in/on the diningtable 1. | 2308.01552#39 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01552 | 40 | In the "pick2" task, we have incorporated new trajectory planning. After locating and placing the first object in the appropriate receptacle, we enable the model to independently generate a trajectory, ensuring that the path does not overlook areas where the second object may be located.
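A minimal sketch of that ordering rule, assuming the location where the first object was found is tracked; the helper name below is hypothetical, not InterAct's code.

```python
# Illustrative sketch: when the second object is needed, revisit the location
# where the first one was found before falling back to the sorter's ordering.
def second_object_search_order(sorted_locations, first_found_at):
    """Put the first object's location at the front of the search trajectory."""
    rest = [loc for loc in sorted_locations if loc != first_found_at]
    return [first_found_at] + rest

# Example:
# second_object_search_order(["countertop 1", "diningtable 1", "sidetable 1"],
#                            "diningtable 1")
# -> ["diningtable 1", "countertop 1", "sidetable 1"]
```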
# <picktwo> Prompt
You are in the middle of a room. Looking quickly around you, you see a armchair 2, a armchair 1, a bed 1, a countertop 1, a diningtable 1, a drawer 2, a drawer 1, a dresser 1, a laundryhamper 1, and a sidetable 1. Your task is to: put two creditcard in dresser. > think: To solve the task, I need to find and take the first | 2308.01552#40 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01552 | 41 | creditcard, then put it in dresser, then find and take the second creditcard, then put it in dresser. OK. > think: First I need to find the first [[creditcard]]. OK. > sorting(creditcard) [countertop 1, diningtable 1, sidetable 1, armchair 1, armchair 2, bed 1, drawer 1, drawer 2, dresser 1, laundryhamper 1] > think: Based on the sorting provided, I can check countertop (1), diningtable (1), sidetable (1), armchair (1-2), bed (1), drawer (1-2), dresser (1), laundryhamper (1) one by one to find creditcard, starting with countertop 1. OK. > go to countertop 1 creditcard is not here. > go to diningtable 1 On the diningtable 1, you see a cellphone 2, a creditcard 4, a creditcard 3, a creditcard 2, a mirror 1, a pencil 2, and a pencil 1. > think: Now I find the first creditcard (2). Next, I need to take it. OK. > take creditcard 2 from diningtable 1 You pick up the creditcard 2 from the diningtable 1. > think: Now I take the first creditcard | 2308.01552#41 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01552 | 42 | to take it. OK. > take creditcard 2 from diningtable 1 You pick up the creditcard 2 from the diningtable 1. > think: Now I take the first creditcard (2). Next, I need to put it in/on dresser 1. OK. > go to dresser 1 On the dresser 1, you see a mug 1, and a television 1. > put creditcard 2 in/on dresser 1 You put the creditcard 2 in/on the dresser 1. > think: Now I put the first creditcard in dresser. Next, I need to find the second [[creditcard]]. I should go back to diningtable 1 to see if there is more creditcard. If not, I will continue to check sidetable (1), armchair (1-2), bed (1), drawer (1-2), dresser (1), laundryhamper (1) one by one to find creditcard. OK. > go to diningtable 1 On the diningtable 1, you see a cellphone 2, a creditcard 4, a creditcard 3, a mirror 1, a pencil 2, and a pencil 1. > think: Now I find the second creditcard (3). Next, I need to take it. OK. > take creditcard 3 from diningtable 1 You pick up the creditcard 3 from the | 2308.01552#42 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01552 | 43 | Now I find the second creditcard (3). Next, I need to take it. OK. > take creditcard 3 from diningtable 1 You pick up the creditcard 3 from the diningtable 1. > think: Now I take the second creditcard (3). Next, I need to put it in/on dresser 1. OK. > go to dresser 1 On the dresser 1, you see a creditcard 2, a mug 1, and a television 1. > put creditcard 3 in/on dresser 1 You put the creditcard 3 in/on the dresser 1. | 2308.01552#43 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | [
{
"id": "2206.07682"
},
{
"id": "2212.06817"
},
{
"id": "2302.01560"
},
{
"id": "2204.02311"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "1803.01356"
},
{
"id": "2303.11366"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2304.06136"
},
{
"id": "2208.14271"
},
{
"id": "2112.00114"
},
{
"id": "2303.17651"
},
{
"id": "2201.11903"
},
{
"id": "2303.17071"
},
{
"id": "2210.11416"
},
{
"id": "2010.03768"
}
] |
2308.01320 | 0 | arXiv:2308.01320v1 [cs.LG] 2 Aug 2023
DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales
Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He
Deepspeed of Microsoft
# Abstract | 2308.01320#0 | DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales | ChatGPT-like models have revolutionized various applications in artificial
intelligence, from summarization and coding to translation, matching or even
surpassing human performance. However, the current landscape lacks an
accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement
Learning with Human Feedback) training pipeline for these powerful models,
particularly when training at the scale of billions of parameters. This paper
introduces DeepSpeed-Chat, a novel system that democratizes RLHF training,
making it accessible to the AI community. DeepSpeed-Chat offers three key
capabilities: an easy-to-use training and inference experience for ChatGPT-like
models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from
InstructGPT, and a robust DeepSpeed-RLHF system that combines various
optimizations for training and inference in a unified way. The system delivers
unparalleled efficiency and scalability, enabling training of models with
hundreds of billions of parameters in record time and at a fraction of the
cost. With this development, DeepSpeed-Chat paves the way for broader access to
advanced RLHF training, even for data scientists with limited resources,
thereby fostering innovation and further development in the field of AI. | http://arxiv.org/pdf/2308.01320 | Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He | cs.LG, cs.AI, cs.CL | 14 pages, 7 figures | null | cs.LG | 20230802 | 20230802 | [
{
"id": "1707.06347"
},
{
"id": "2106.09685"
},
{
"id": "1806.03822"
},
{
"id": "1910.03771"
},
{
"id": "2205.01068"
}
] |
2308.01390 | 0 | arXiv:2308.01390v2 [cs.CV] 7 Aug 2023
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
Anas Awadalla*1 Irena Gao*2 Josh Gardner1 Jack Hessel3 Yusuf Hanafy1 Wanrong Zhu5 Shiori Sagawa2 Kalyani Marathe1 Jenia Jitsev4,9 Yonatan Bitton6 Simon Kornblith8 Samir Gadre7 Pang Wei Koh1,8 Gabriel Ilharco1 Mitchell Wortsman1 Ludwig Schmidt1,3,4
# Abstract
[Figure 1 chart title: OpenFlamingo average performance compared to Flamingo]
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models [3]. On seven vision-language datasets, OpenFlamingo models average between 80-89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
1
# Introduction | 2308.01390#0 | OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models | We introduce OpenFlamingo, a family of autoregressive vision-language models
ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce
an open-source replication of DeepMind's Flamingo models. On seven
vision-language datasets, OpenFlamingo models average between 80 - 89% of
corresponding Flamingo performance. This technical report describes our models,
training data, hyperparameters, and evaluation suite. We share our models and
code at https://github.com/mlfoundations/open_flamingo. | http://arxiv.org/pdf/2308.01390 | Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt | cs.CV, cs.AI, cs.LG | null | null | cs.CV | 20230802 | 20230807 | [
{
"id": "1909.11059"
},
{
"id": "2306.05425"
},
{
"id": "2108.07258"
},
{
"id": "2205.12005"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2210.08402"
},
{
"id": "2306.16527"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2209.06794"
},
{
"id": "2304.11277"
},
{
"id": "2305.04790"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2005.04790"
},
{
"id": "2301.12597"
},
{
"id": "2201.07520"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2303.16199"
},
{
"id": "2304.14108"
},
{
"id": "2304.06939"
},
{
"id": "2207.07635"
},
{
"id": "2109.01652"
},
{
"id": "2210.05916"
}
] |
2308.01320 | 1 | Deepspeed of Microsoft
# Abstract
ChatGPT-like models have revolutionized various applications in artificial intelligence, from summarization and coding to translation, matching or even surpassing human performance. However, the current landscape lacks an accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement Learning with Human Feedback) training pipeline for these powerful models, particularly when training at the scale of billions of parameters. This paper introduces DeepSpeed-Chat, a novel system that democratizes RLHF training, making it accessible to the AI community. DeepSpeed-Chat offers three key capabilities: an easy-to-use training and inference experience for ChatGPT-like models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from InstructGPT, and a robust DeepSpeed-RLHF system that combines various optimizations for training and inference in a unified way. The system delivers unparalleled efficiency and scalability, enabling training of models with hundreds of billions of parameters in record time and at a fraction of the cost. With this development, DeepSpeed-Chat paves the way for broader access to advanced RLHF training, even for data scientists with limited resources, thereby fostering innovation and further development in the field of AI.
# 1 Overview | 2308.01320#1 | DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales | ChatGPT-like models have revolutionized various applications in artificial
intelligence, from summarization and coding to translation, matching or even
surpassing human performance. However, the current landscape lacks an
accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement
Learning with Human Feedback) training pipeline for these powerful models,
particularly when training at the scale of billions of parameters. This paper
introduces DeepSpeed-Chat, a novel system that democratizes RLHF training,
making it accessible to the AI community. DeepSpeed-Chat offers three key
capabilities: an easy-to-use training and inference experience for ChatGPT-like
models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from
InstructGPT, and a robust DeepSpeed-RLHF system that combines various
optimizations for training and inference in a unified way. The system delivers
unparalleled efficiency and scalability, enabling training of models with
hundreds of billions of parameters in record time and at a fraction of the
cost. With this development, DeepSpeed-Chat paves the way for broader access to
advanced RLHF training, even for data scientists with limited resources,
thereby fostering innovation and further development in the field of AI. | http://arxiv.org/pdf/2308.01320 | Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He | cs.LG, cs.AI, cs.CL | 14 pages, 7 figures | null | cs.LG | 20230802 | 20230802 | [
{
"id": "1707.06347"
},
{
"id": "2106.09685"
},
{
"id": "1806.03822"
},
{
"id": "1910.03771"
},
{
"id": "2205.01068"
}
] |
2308.01390 | 1 | # 1 Introduction
A popular format for vision and language models is (image, text) → text, i.e., models take as input an image and some text, and produce text as output, e.g., BLIP-2 [22]. The flexible format directly supports tasks like image classification and visual question answering (VQA).
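To make the (image, text) → text interface concrete, here is a minimal sketch assuming a stubbed model class: both visual question answering and image classification reduce to conditioning text generation on an image plus a prompt. The VLM class and its methods are invented stand-ins for illustration, not the OpenFlamingo or BLIP-2 API.

```python
# Minimal sketch of the (image, text) -> text interface described above. The VLM
# class and its methods are invented stand-ins (not the OpenFlamingo or BLIP-2
# API); they only show how one interface covers both classification-style and
# VQA-style prompting.

from dataclasses import dataclass
from typing import List


@dataclass
class Image:
    pixels: bytes  # stand-in for decoded image data


class VLM:
    """Stand-in vision-language model: consumes images plus text, emits text."""

    def generate(self, images: List[Image], prompt: str, max_new_tokens: int = 20) -> str:
        # A real model would encode the images, condition the language model on
        # them (e.g., via cross-attention), and decode text autoregressively.
        return "<generated text>"


model = VLM()
img = Image(pixels=b"...")

# Visual question answering: the prompt poses a question about the image.
answer = model.generate([img], "Question: What is shown in the picture? Answer:")

# Image classification, phrased as open-ended generation.
label = model.generate([img], "This is a photo of")
```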
[Figure 1 (bar chart): y-axis shows performance as a fraction of Flamingo (ticks 0%-90%); x-axis shows OpenFlamingo model: OF-3B, OF-3B (I), OF-4B, OF-4B (I), OF-9B]
Figure 1: OpenFlamingo performance as a fraction of corresponding Flamingo performance, averaged across evaluation settings (7 datasets × 5 options for number of in-context examples). Demonstrations are chosen using RICES (Retrieval-based In-Context Example Selection); a schematic sketch of RICES appears below, after this entry's reference list. More details regarding selecting demonstrations can be found in Section 3.4. We compare OpenFlamingo-3B and -4B models to Flamingo-3B, and OpenFlamingo-9B to Flamingo-9B. Error bars are standard deviations over settings. "OF-3B (I)" refers to OpenFlamingo-3B (Instruct), the 3B model trained with a language-instruction-tuned backbone. | 2308.01390#1 | OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models | We introduce OpenFlamingo, a family of autoregressive vision-language models
ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce
an open-source replication of DeepMind's Flamingo models. On seven
vision-language datasets, OpenFlamingo models average between 80 - 89% of
corresponding Flamingo performance. This technical report describes our models,
training data, hyperparameters, and evaluation suite. We share our models and
code at https://github.com/mlfoundations/open_flamingo. | http://arxiv.org/pdf/2308.01390 | Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt | cs.CV, cs.AI, cs.LG | null | null | cs.CV | 20230802 | 20230807 | [
{
"id": "1909.11059"
},
{
"id": "2306.05425"
},
{
"id": "2108.07258"
},
{
"id": "2205.12005"
},
{
"id": "2304.14178"
},
{
"id": "2205.01068"
},
{
"id": "2210.08402"
},
{
"id": "2306.16527"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2209.06794"
},
{
"id": "2304.11277"
},
{
"id": "2305.04790"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2005.04790"
},
{
"id": "2301.12597"
},
{
"id": "2201.07520"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2303.16199"
},
{
"id": "2304.14108"
},
{
"id": "2304.06939"
},
{
"id": "2207.07635"
},
{
"id": "2109.01652"
},
{
"id": "2210.05916"
}
] |
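The Figure 1 caption for this entry mentions RICES (Retrieval-based In-Context Example Selection) for choosing in-context demonstrations. The sketch below shows the general idea, assuming a stand-in image encoder: rank the support set by visual similarity to the query image and keep the top-k examples as demonstrations. The function names and the toy encoder are invented for illustration; the paper's actual evaluation procedure is described in its Section 3.4.

```python
# Schematic sketch of RICES (Retrieval-based In-Context Example Selection):
# demonstrations are picked by visual similarity to the query image. The encoder
# below is a toy stand-in for a frozen vision encoder (e.g., a CLIP image tower).

import numpy as np
from typing import List, Tuple


def encode_images(images: List[np.ndarray]) -> np.ndarray:
    """Stand-in image encoder: maps each image to an L2-normalized feature vector."""
    feats = np.stack([img.reshape(-1)[:64].astype(np.float32) for img in images])
    return feats / np.linalg.norm(feats, axis=1, keepdims=True)


def rices_select(query_image: np.ndarray,
                 support_images: List[np.ndarray],
                 support_texts: List[str],
                 num_shots: int) -> List[Tuple[np.ndarray, str]]:
    """Return the `num_shots` support examples whose images are most similar
    (by cosine similarity) to the query image."""
    query_feat = encode_images([query_image])[0]
    support_feats = encode_images(support_images)
    sims = support_feats @ query_feat            # cosine similarity (features are normalized)
    top = np.argsort(-sims)[:num_shots]
    return [(support_images[i], support_texts[i]) for i in top]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pool = [rng.random((8, 8, 3)) for _ in range(10)]
    texts = [f"caption {i}" for i in range(10)]
    demos = rices_select(rng.random((8, 8, 3)), pool, texts, num_shots=4)
```

The selected (image, text) pairs would then be interleaved before the query image as few-shot context for the vision-language model.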