doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2309.09958 | 6 | Once the instruction-tuned LLM is given, we follow [12] to perform the two-stage LLaVA lightning training: (i) Stage 1: Pre-training for Feature Alignment. The linear projection layer is trained, which maps the visual feature (the features before the last layer of the pre-trained image encoder) to the word embedding space of the LLM. More specifically, the projection dimension is 1024→6656 for the 33B model and 1024→8192 for the 65B model, respectively. In this stage, we use the concept-balanced subset of LAION-CC-SBU data with 558K samples. (ii) Stage 2: Visual Instruction Tuning. We use the LLaVA-80K multimodal instruction dataset for the fine-tuning stage. Various training schedules are explored to enable the model to follow the diverse instructions to complete tasks in the wild, as detailed below.
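Stage 1 trains only this linear projection from the vision encoder's penultimate-layer features into the LLM word-embedding space (1024→6656 for the 33B model). A minimal NumPy sketch of the mapping itself, where the random weights merely stand in for the trained projection and the 576-token patch count is an illustrative assumption:

```python
import numpy as np

def project_visual_features(feats, W, b):
    """Map vision-encoder features (N, d_vision) into the LLM embedding space (N, d_llm)."""
    return feats @ W + b

rng = np.random.default_rng(0)
d_vision, d_llm = 1024, 6656                         # 33B-model dimensions from the text
W = rng.normal(scale=0.02, size=(d_vision, d_llm))   # stand-in for the trained projection
b = np.zeros(d_llm)

patch_feats = rng.normal(size=(576, d_vision))       # e.g. 576 image-patch tokens (assumed)
tokens = project_visual_features(patch_feats, W, b)
print(tokens.shape)                                  # (576, 6656)
```

The projected rows are then consumed by the LLM exactly like word-embedding vectors, which is why only this one matrix needs training in Stage 1.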
Tuning Methods. We explore both the trainable modules and training data mixing for efficient and effective visual instruction tuning of large models. | 2309.09958#6 | An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models | Visual instruction tuning has recently shown encouraging progress with
open-source large multimodal models (LMM) such as LLaVA and MiniGPT-4. However,
most existing studies of open-source LMM are performed using models with 13B
parameters or smaller. In this paper we present an empirical study of scaling
LLaVA up to 33B and 65B/70B, and share our findings from our explorations in
image resolution, data mixing and parameter-efficient training methods such as
LoRA/QLoRA. These are evaluated by their impact on the multi-modal and language
capabilities when completing real-world tasks in the wild.
We find that scaling LMM consistently enhances model performance and improves
language capabilities, and that the performance of LoRA/QLoRA tuning of LMM is
comparable to that of full-model fine-tuning. Additionally, the
study highlights the importance of higher image resolutions and mixing
multimodal-language data to improve LMM performance, and shows that visual
instruction tuning can sometimes improve LMM's pure language capability. We hope that this
study makes state-of-the-art LMM research at a larger scale more accessible,
thus helping establish stronger baselines for future research. Code and
checkpoints will be made public. | http://arxiv.org/pdf/2309.09958 | Yadong Lu, Chunyuan Li, Haotian Liu, Jianwei Yang, Jianfeng Gao, Yelong Shen | cs.CV, cs.CL | Released at LLaVA Model Zoo:
https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md | null | cs.CV | 20230918 | 20230918 | [
{
"id": "2307.06281"
},
{
"id": "2305.03726"
},
{
"id": "2306.14895"
},
{
"id": "2009.03300"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "2306.04751"
},
{
"id": "2305.14314"
},
{
"id": "2304.15010"
},
{
"id": "2307.09288"
},
{
"id": "2308.02490"
},
{
"id": "2308.01390"
}
] |
2309.09971 | 6 | To incorporate agent AI into video games, we design an infrastructure - MINDAGENT - inspired by multi-agent task allocation optimization theories to facilitate LLM multi-agent planning capabilities. Our infrastructure enables LLMs to perform complex coordination and scheduling with multiple different agents. We conduct comprehensive evaluations with recently introduced LLMs playing our game with our infrastructure, including GPT-4, Claude, and LLaMA. Through the proposed MINDAGENT interactive multi-agent planning framework for LLMs, we make the following key observations: 1) zero-shot multi-agent planning: without bells and whistles, powerful pretrained LLMs like GPT-4 are capable of scheduling multiple agents (ranging from 2 to 4) into completing dishes, and even collaborating with human players, by merely reading simple game instructions and recipes; 2) planning with advanced prompting: we are able to significantly boost their multi-agent planning performance by leveraging the emergent in-context learning capability (Brown et al., 2020; Wei et al., 2021): adding very few expert demonstrations, even from different game levels, to the prompt, explaining the | 2309.09971#6 | MindAgent: Emergent Gaming Interaction | Large Language Models (LLMs) have the capacity of performing complex
scheduling in a multi-agent system and can coordinate these agents into
completing sophisticated tasks that require extensive collaboration. However,
despite the introduction of numerous gaming frameworks, the community still
lacks benchmarks for building general multi-agent collaboration
infrastructure that encompasses both LLM and human-NPC collaboration. In this
work, we propose a novel infrastructure - MindAgent - to evaluate the emergent
planning and coordination capabilities of gaming interaction. In particular, our
infrastructure leverages an existing gaming framework to i) require understanding
of the coordinator for a multi-agent system, ii) collaborate with human players
via proper instructions without fine-tuning, and iii) establish in-context learning
on few-shot prompts with feedback. Furthermore, we introduce CUISINEWORLD, a new
gaming scenario and related benchmark that measures multi-agent collaboration
efficiency and supervises multiple agents playing the game simultaneously. We
conduct comprehensive evaluations with a new auto-metric, CoS, for calculating
collaboration efficiency. Finally, our infrastructure can be deployed in
real-world gaming scenarios in a customized VR version of CUISINEWORLD and
adapted to the existing broader Minecraft gaming domain. We hope our findings on
LLMs and the new infrastructure for general-purpose scheduling and coordination
can help shed light on how such skills can be obtained by learning from large
language corpora. | http://arxiv.org/pdf/2309.09971 | Ran Gong, Qiuyuan Huang, Xiaojian Ma, Hoi Vo, Zane Durante, Yusuke Noda, Zilong Zheng, Song-Chun Zhu, Demetri Terzopoulos, Li Fei-Fei, Jianfeng Gao | cs.AI, cs.HC, cs.MA | The first three authors contributed equally. 28 pages | null | cs.AI | 20230918 | 20230919 | [
{
"id": "2307.04721"
},
{
"id": "2210.16257"
},
{
"id": "2307.02485"
},
{
"id": "2304.03347"
},
{
"id": "2010.03768"
},
{
"id": "2306.06070"
},
{
"id": "2308.11339"
},
{
"id": "2308.03688"
},
{
"id": "2212.14882"
},
{
"id": "2302.06100"
},
{
"id": "2302.01560"
},
{
"id": "1903.03094"
},
{
"id": "2305.16291"
},
{
"id": "2010.09890"
},
{
"id": "2303.05398"
},
{
"id": "1910.03655"
},
{
"id": "2209.07753"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2305.00970"
}
] |
2309.09958 | 7 | Tuning Methods. We explore both the trainable modules and training data mixing for efficient and effective visual instruction tuning of large models.
In addition to tuning the linear projection layer, two schemes are considered to tune the LLM: (i) full-model fine-tuning of the LLM and (ii) parameter-efficient training methods. For the latter, LoRA [7] and QLoRA [4] are employed to allow us to tune large models with limited compute resources. This aims to gain an in-depth understanding of the trade-off between the training cost and model performance.
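The LoRA trade-off described above is visible in a few lines: the frozen pretrained weight W is left untouched, while a low-rank pair (B, A) is trained and its scaled product is added in the forward pass. A NumPy sketch with illustrative shapes (not the paper's actual configuration), using the paper's alpha = 2 × rank choice:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 512, 8                    # hidden size and LoRA rank (illustrative)
alpha = 2 * r                    # the paper found alpha = 2 * rank effective

W = rng.normal(size=(d, d))              # frozen pretrained weight
A = rng.normal(scale=0.01, size=(r, d))  # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-init so the update starts at 0

def lora_forward(x):
    # frozen full-rank path + low-rank update scaled by alpha / r
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(4, d))
out = lora_forward(x)
# trainable parameters: 2 * d * r = 8,192 vs d * d = 262,144 for full fine-tuning
print(out.shape)                 # (4, 512)
```

Because B starts at zero, the adapted model initially matches the frozen one; only the small A and B matrices accumulate gradient updates, which is what makes tuning 33B/65B models feasible on limited compute.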
• Data mixing. Typically only the multimodal instruction data is used in Stage 2. We further consider mixing the language-only instruction data ShareGPT with the LLaVA-80K multimodal instruction data to gain an in-depth understanding of the trade-off between models' language and multimodal capabilities. | 2309.09958#7 | An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models |
2309.09971 | 7 | (Brown et al., 2020; Wei et al., 2021): adding very few expert demonstrations, even from different game levels, to the prompt, explaining the rationale of certain actions as in Chain-of-Thought prompting (Wei et al., 2022), and providing on-the-fly feedback to the LLMs during planning; 3) generalist potential: LLMs exhibit great potential as generalist multi-agent planners, with strong generalization to coordinating more agents given examples with fewer agents, and adaptation to new game domains like Minecraft. | 2309.09971#7 | MindAgent: Emergent Gaming Interaction |
2309.09958 | 8 | Hyper-parameters. In the training process of both stages, we utilize the DeepSpeed library³ and employ the ZeRO3 optimizer, except for QLoRA runs, where we use ZeRO2. We use a maximum sequence length of 2048. For Stage 1, we train both the 33B and 65B models with a learning rate of 1×10⁻⁴ and no weight decay, using linear warmup for 3% of total training steps followed by linear decay. For Stage 2, we use a learning rate of 2×10⁻⁵ to train 1 epoch for all models in full fine-tuning, and a learning rate of 1×10⁻⁴ for the LoRA/QLoRA runs. We conducted a set of hyperparameter searches for the LoRA runs and found that a larger LoRA alpha, or equivalently a larger learning rate, was crucial to get the best performance. Specifically, we use a LoRA alpha equal to 2 times the LoRA rank, and a learning rate of 1×10⁻⁴, which works the best for all the models. For full fine-tuning, we use a total batch size of 512 on 4 A100 nodes, where each of these nodes is equipped with 8 A100-80G GPUs. For LoRA/QLoRA runs, | 2309.09958#8 | An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models |
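The schedule described for Stage 1 (linear warmup over 3% of total steps, then linear decay to zero) can be sketched as a pure function of the step index; the peak shown here is the Stage-2 full fine-tuning rate of 2×10⁻⁵ from the text, and the 1000-step total is an illustrative assumption:

```python
def lr_at(step, total_steps, peak_lr=2e-5, warmup_frac=0.03):
    """Linear warmup for warmup_frac of training, then linear decay to 0."""
    warmup_steps = max(1, int(total_steps * warmup_frac))
    if step < warmup_steps:
        # ramp from 0 up to peak_lr over the warmup window
        return peak_lr * step / warmup_steps
    # decay from peak_lr at the end of warmup down to 0 at total_steps
    return peak_lr * (total_steps - step) / (total_steps - warmup_steps)

total = 1000
print(lr_at(0, total))      # 0.0
print(lr_at(30, total))     # 2e-05 (peak, at the end of the 3% warmup)
print(lr_at(1000, total))   # 0.0
```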
2309.09971 | 8 | While, compared to canonical domain-specific automated planning systems, multi-agent planning with LLMs can still be bottlenecked by computation cost, context-length limitations, non-optimal plans, etc., it has the potential of improving from data without fine-tuning (via in-context learning), seamlessly adapting to planning problems from different domains, and offering more flexible interfaces. We hope our findings on LLMs for general-purpose scheduling and coordination can help shed some light on how such skills can be obtained by learning from large text corpora, and facilitate the emergence of better LLM planners.
To summarize, our key contributions are as follows:
• We establish a new gaming scenario and related benchmark based on a multi-agent virtual kitchen environment, CUISINEWORLD. It adopts a minimal text-based game format and supports various planning task structures and difficulties, making it an ideal test bed for the emergent multi-agent planning (scheduling and coordination) capacity of LLMs.
• We introduce MINDAGENT, an infrastructure for interactive multi-agent planning with LLMs, which demonstrates the in-context multi-agent planning capacity of LLMs and brings several prompting techniques that help facilitate their planning ability, including providing few-shot demonstrations, planning rationales, and environmental feedback.
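The prompting ingredients listed above (game instructions, few-shot expert demonstrations with rationales, and environment feedback) reduce to prompt assembly. A hypothetical sketch, where the section headers, recipe format, and dispatch syntax are our own illustrations rather than MINDAGENT's actual prompt layout:

```python
def build_planner_prompt(instructions, recipes, demos, feedback=None):
    """Assemble an in-context multi-agent planning prompt for a centralized LLM dispatcher."""
    parts = ["== Game Instructions ==", instructions,
             "== Recipes ==", "\n".join(recipes)]
    for i, (state, action, rationale) in enumerate(demos, 1):
        # few-shot expert demonstrations, each with a chain-of-thought rationale
        parts += [f"== Demo {i} ==", f"State: {state}",
                  f"Rationale: {rationale}", f"Dispatch: {action}"]
    if feedback:                              # on-the-fly environment feedback
        parts += ["== Feedback ==", feedback]
    parts.append("== Current State ==")       # the LLM completes the next dispatch from here
    return "\n".join(parts)

p = build_planner_prompt(
    "Cook and serve dishes before their orders expire.",
    ["salmon_sashimi: slice(salmon)"],
    [("agent1 idle, salmon at storage",
      "agent1: goto(storage); slice(salmon)",
      "Only one agent is free, so it handles the full recipe.")],
    feedback="agent1: action failed, chopping board occupied.")
print("== Demo 1 ==" in p)   # True
```

Adding or removing demos and the feedback section reproduces the ablation axes the paper studies (zero-shot vs. few-shot, with vs. without feedback).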
| 2309.09971#8 | MindAgent: Emergent Gaming Interaction |
2309.09971 | 9 |
• We conduct extensive evaluations with multiple LLMs and prompting settings on our benchmark. Experimental results confirm their potential as generalist multi-agent planners, in terms of generalizing to more agents.
• We deploy our system in real-world gaming scenarios and demonstrate its capabilities in human-AI interaction.
2 RELATED WORK
Multi-Agent Coordination. The field of multi-agent collaboration boasts a comprehensive body of literature. Traditionally, such collaborations have been modeled using MDP/POMDP frameworks (Lowe et al., 2017; Rashid et al., 2020; Jain et al., 2019). | 2309.09971#9 | MindAgent: Emergent Gaming Interaction |
2309.09958 | 10 | # 3 Results
We first compare our large checkpoints on two recent benchmarks which are specifically designed for LMM, then report our findings in the course of scaling up LLaVA models.
1. https://huggingface.co/lmsys/vicuna-33b-v1.3
2. https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md
3. https://github.com/microsoft/DeepSpeed
| Models | Reasoning | Conversation | Detail | Overall |
|---|---|---|---|---|
| Bard-0718 | 78.7 | 83.7 | 69.7 | 77.8 |
| Bing-Chat-0629 | 90.1 | 59.6 | 52.2 | 71.5 |
| LLaVA-13B (beam=1) | 81.7 | 64.3 | 55.9 | 70.1 |
| LLaVA-13B (beam=5) | 84.3 | 68.4 | 59.9 | 73.5 |
| LLaVA-33B (beam=1) | 82.9 | 70.2 | 62.6 | 73.9 |
| LLaVA-33B (beam=5) | 83.5 | 72.6 | 61.9 | 74.8 |
| LLaVA-65B (beam=1) | 87.3 | 63.8 | 62.3 | 74.2 |
| LLaVA-65B (beam=5) | 88.7 | 59.4 | 65.7 | 74.4 |
| 2309.09958#10 | An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models |
2309.09971 | 10 | However, there has been a recent shift towards utilizing Large Language Models (LLMs) for these collaborations. For instance, Zhang et al. (2023b) delved into how large language models might communicate and cooperate in a watch-and-help (WAH) task. Meanwhile, Zhang et al. (2023a) investigated a two-agent collaboration game inspired by the simpler dynamics of the two-agent Overcooked-style game. Notably, their research chiefly concentrated on the task success rate, with most studies typically anchored to a singular task objective. In contrast, we emphasize the importance of collaboration efficiency in scenarios encompassing multiple task objectives. Further, our research uniquely focuses on evaluating the collaborative efficiency of more than two agents. Additionally, while other works like Park et al. (2023) simulate each agent individually, we employ a centralized system. This approach not only significantly reduces the number of API calls but also reduces context length, making it more appropriate for gaming applications. | 2309.09971#10 | MindAgent: Emergent Gaming Interaction |
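The saving from the centralized system described above, versus simulating each agent individually, is easy to quantify: one LLM call per turn for the dispatcher instead of one call per agent per turn. A back-of-the-envelope sketch with illustrative turn and agent counts:

```python
def api_calls(turns, n_agents, centralized):
    # per-agent simulation queries the LLM once per agent each turn;
    # a centralized dispatcher issues a single call covering all agents
    return turns * (1 if centralized else n_agents)

turns, agents = 50, 4
print(api_calls(turns, agents, centralized=False))  # 200
print(api_calls(turns, agents, centralized=True))   # 50
```

The centralized call also carries one shared context rather than per-agent copies, which is the context-length saving the paragraph mentions.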
2309.09971 | 11 | Planning with LLMs. A number of works leverage LLMs to perform task planning (Huang et al., 2022a; Wang et al., 2023a; Yao et al., 2023). They exploit the LLMs' internet-scale domain knowledge and emergent zero-shot planning abilities to perform complex task planning and reasoning. Recent works in robotics also leverage LLMs to perform task planning; they decompose a natural language instruction into a sequence of subtasks, either in natural language or in Python code (Ahn et al., 2022; Huang et al., 2022b; Liang et al., 2022), and then use a low-level controller to execute these subtasks. Additionally, Huang et al. (2022b), Liang et al. (2022), and Wang et al. (2023b) also incorporate environment feedback to improve task performance. | 2309.09971#11 | MindAgent: Emergent Gaming Interaction |
2309.09958 | 12 | Results of various open-source LMMs as reported in the MM-VET paper [19]:

Model                         Rec   OCR   Knowledge  Generation  Spatial  Math  Total
LLaMA-Adapter v2-7B [5]       16.8   7.8   2.5        3.0        16.6     4.4   13.6±0.2
OpenFlamingo-9B [1, 2]        24.6  14.4  13.0       12.3        18.0    15.0   21.8±0.1
MiniGPT-4-8B [20]             27.4  15.0  12.8       13.9        20.3     7.7   22.1±0.1
BLIP-2-12B [11]               27.5  11.1  11.8        7.0        16.2     5.8   22.4±0.2
LLaVA-7B [12]                 28.0  17.1  16.3       18.9        21.2    11.5   23.8±0.6
MiniGPT-4-14B [20]            29.9  16.1  20.4       22.1        22.2     3.8   24.4±0.4
Otter-9B [8]                  28.4  16.4  19.4       20.7        19.3    15.0   24.6±0.2
InstructBLIP-14B [3]          30.8  16.0   9.8        9.0        21.1    10.5   25.6±0.3
InstructBLIP-8B [3]           32.4  14.6  16.5       18.2        18.6     7.7   26.2±0.2
LLaVA-13B [12]                30.9  20.1  23.5       26.4        24.3     7.7   26.4±0.1
MM-ReAct-GPT-3.5 [18]         24.2  31.5  21.5       20.7        32.3    26.2   27.9±0.1
LLaVA-7B (LLaMA-2) [12]       32.9  20.1  19.0       20.1        25.7     5.2   28.1±0.4
LLaVA-13B (V1.3, 336px) [12]  38.1  22.3  25.2       25.8        31.3    11.2   32.5±0.1
LLaVA-13B (LLaMA-2) [12]      39.2  22.7  26.5       29.3        29.6     7.7   32.9±0.1
MM-ReAct-GPT-4 [18]           33.1  65.7  29.0       35.0        56.8    69.2   44.6±0.2

| 2309.09958#12 | An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models | Visual instruction tuning has recently shown encouraging progress with
open-source large multimodal models (LMM) such as LLaVA and MiniGPT-4. However,
most existing studies of open-source LMM are performed using models with 13B
parameters or smaller. In this paper we present an empirical study of scaling
LLaVA up to 33B and 65B/70B, and share our findings from our explorations in
image resolution, data mixing and parameter-efficient training methods such as
LoRA/QLoRA. These are evaluated by their impact on the multi-modal and language
capabilities when completing real-world tasks in the wild.
We find that scaling LMM consistently enhances model performance and improves
language capabilities, and performance of LoRA/QLoRA tuning of LMM are
comparable to the performance of full-model fine-tuning. Additionally, the
study highlights the importance of higher image resolutions and mixing
multimodal-language data to improve LMM performance, and visual instruction
tuning can sometimes improve LMM's pure language capability. We hope that this
study makes state-of-the-art LMM research at a larger scale more accessible,
thus helping establish stronger baselines for future research. Code and
checkpoints will be made public. | http://arxiv.org/pdf/2309.09958 | Yadong Lu, Chunyuan Li, Haotian Liu, Jianwei Yang, Jianfeng Gao, Yelong Shen | cs.CV, cs.CL | Released at LLaVA Model Zoo:
https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md | null | cs.CV | 20230918 | 20230918 | [
{
"id": "2307.06281"
},
{
"id": "2305.03726"
},
{
"id": "2306.14895"
},
{
"id": "2009.03300"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "2306.04751"
},
{
"id": "2305.14314"
},
{
"id": "2304.15010"
},
{
"id": "2307.09288"
},
{
"id": "2308.02490"
},
{
"id": "2308.01390"
}
] |
2309.09971 | 12 | Benchmarks using Games. Numerous games have been developed to study task planning (Baker et al., 2022; Carroll et al., 2019), yet only a handful delve into multi-agent collaboration. Even within this limited subset, the focus predominantly remains on two-agent interactions where responsibilities are not evenly distributed. As evidenced by Wan et al. (2022) and Puig et al. (2020), it's common for one player to assume a dominant role while the other provides support. In contrast, our paper assumes equal responsibilities across agents, and we expand our investigation to encompass collaborations involving more than just two agents, even with human players. While some previous studies have ventured into multi-task settings, none have delved into scenarios where agents must complete multiple distinct tasks using competing resources within a single episode. Furthermore, our game presents tasks with varied levels of difficulty.
Additionally, our work distinguishes itself from Carroll et al. (2019). Contrary to their settings, our game settings feature a diverse array of tools and task objectives, thereby generating an exponentially larger task space. A comparison between our work and other related games is shown in Table 1.
# 3 THE NEW GAMING CUISINEWORLD DESIGN AND BENCHMARK | 2309.09971#12 | MindAgent: Emergent Gaming Interaction
2309.09958 | 13 | Continuation of the reported MM-VET results (flattened, column-major values): Rec, models 2-15: 24.6 27.4 27.5 28.0 29.9 28.4 30.8 32.4 30.9 24.2 32.9 38.1 39.2 33.1. Knowledge: 2.5 13.0 12.8 11.8 16.3 20.4 19.4 9.8 16.5 23.5 21.5 19.0 25.2 26.5 29.0. Generation: 3.0 12.3 13.9 7.0 18.9 22.1 20.7 9.0 18.2 26.4 20.7 20.1 25.8 29.3 35.0. Spatial: 16.6 18.0 20.3 16.2 21.2 22.2 19.3 21.1 18.6 24.3 32.3 25.7 31.3 29.6 56.8. Math: 4.4 15.0 7.7 5.8 11.5 3.8 15.0 10.5 7.7 7.7 26.2 5.2 11.2 7.7 69.2. Total, models 1-11: 13.6±0.2 21.8±0.1 22.1±0.1 22.4±0.2 23.8±0.6 24.4±0.4 24.6±0.2 25.6±0.3 26.2±0.2 26.4±0.1 27.9±0.1. | 2309.09958#13 | An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models
2309.09971 | 13 | # 3 THE NEW GAMING CUISINEWORLD DESIGN AND BENCHMARK
We introduce CUISINEWORLD as a novel and flexible game for multi-agent scheduling and coordination in a virtual kitchen environment. In this game, a multi-agent system needs to oversee multiple agents and coordinate them, with the goal of completing as many dish orders as possible. It is equipped with a textual interface since our focus is evaluating LLM-based planning agents. Our modularized design separates tasks and game engines, allowing more tasks (types of dishes) and domains (how to implement the "kitchen": text-based engine, Unity, Minecraft, etc.) to be included.
| 2309.09971#13 | MindAgent: Emergent Gaming Interaction
2309.09958 | 14 | Continuation of the Total column for the reported models (7-15): 24.6±0.2 25.6±0.3 26.2±0.2 26.4±0.1 27.9±0.1 28.1±0.4 32.5±0.1 32.9±0.1 44.6±0.2

Results with our own experiment runs:

Model                     Rec   OCR   Knowledge  Generation  Spatial  Math  Total
LLaVA-13B (LLaMA-2)       38.4  21.0  26.3       28.8        28.0     7.7   32.6±0.1
LLaVA-33B                 38.5  25.0  26.2       28.2        29.2     7.7   32.9±0.3
LLaVA-33B (Data Mixing)   37.7  27.1  26.2       28.6        28.1    11.5   34.1±0.3
LLaVA-65B                 39.2  28.2  26.2       28.3        33.0    15.0   35.5±0.3
LLaVA-65B (Data Mixing)   41.8  27.9  30.4       32.3        30.5     7.3   36.4±0.2

| 2309.09958#14 | An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models
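The ± figures in the Total column read as a mean and spread over repeated scoring runs (the MM-VET evaluator queries GPT-4, whose ratings vary between runs). A minimal sketch of that aggregation, under the assumption that ± denotes a standard deviation across runs:

```python
import statistics

def mean_std(scores):
    """Aggregate repeated evaluation runs into (mean, standard deviation)."""
    m = statistics.mean(scores)
    s = statistics.stdev(scores) if len(scores) > 1 else 0.0
    return m, s

# Hypothetical totals from five repeated GPT-4 scoring runs for one model.
runs = [32.5, 32.7, 32.6, 32.5, 32.7]
m, s = mean_std(runs)
print(f"{m:.1f}±{s:.1f}")  # 32.6±0.1
```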
2309.09971 | 14 |
Table 1 (flattened in extraction; the per-cell check marks were lost). Benchmarks compared: ALFWorld (Shridhar et al., 2020), WAH (Puig et al., 2020), TextWorld (Côté et al., 2019), Generative Agents (Park et al., 2023), EMATP (Liu et al., 2022), Overcooked-AI (Carroll et al., 2019), HandMeThat (Wan et al., 2022), DialFRED (Gao et al., 2022), TEACH (Padmakumar et al., 2022), CerealBar (Suhr et al., 2019), LIGHT (Urbanek et al., 2019), and Diplomacy (Bakhtin et al., 2022). Feature columns: Multi-task, Object Interaction, Tool Use, Maximum Agents (respectively 1, 2, 1, 25, 2, 2, 2, 2, 2, 2, 1369, 7), Collaboration (footnoted for DialFRED and TEACH; see the caption), and Human in-the-loop. CUISINEWORLD (Ours) supports 4+ maximum agents. | 2309.09971#14 | MindAgent: Emergent Gaming Interaction
Table 2: Performance of various open-source LMMs on MM-VET. Note that MM-ReAct is not a single multimodal model; it is a system built on chaining visual tools via GPT-3.5 or GPT-4, which we append as a reference. Our experiment run on LLaVA-13B (LLaMA-2) yields a very similar score to the same checkpoint reported in the MM-VET paper, indicating that our evaluation pipelines are consistent.
# 3.1 Comparisons on Benchmarks | 2309.09958#15 | An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models
2309.09971 | 15 | Procedural Level Generation (final column of Table 1; the per-benchmark marks were lost in extraction)
Table 1: Comparison between CUISINEWORLD and other related benchmarks. Multi-task: The benchmark contains multiple different tasks. Object Interaction: Agents have to manipulate or engage with different items or environmental elements to achieve certain goals with irreversible actions. Tool Use: Completing tasks necessitates the use of specific tools by the agents. Maximum Agents: This denotes the upper limit of agents that can be present in a single experiment. Collaboration: Many tasks mandate teamwork and collaboration between different agents. Human in-the-loop: The framework allows humans to join the game and collaborate actively with the agents. Procedural Level Generation: There's flexibility in adding new tasks, making the game dynamic and adaptable. †: Notably, even though multiple agents can be present, the second agent is limited to communicating with the first agent. The second agent cannot interact with the environment in an active gaming capacity.
Type      Arguments                Description
goto      agent, location          Move agent to location
get       agent, location, (item)  agent obtains item from location
put       agent, location          agent puts everything it holds onto location
activate  agent, location          agent turns on location
noop      agent                    not dispatching agent
Table 2: Action space in CUISINEWORLD.
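Table 2's action space maps onto a small set of typed dispatch commands. A minimal sketch in Python (class and field names are illustrative, not from the MindAgent codebase):

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative encoding of CUISINEWORLD's dispatch commands (Table 2).
@dataclass(frozen=True)
class Action:
    type: str                   # goto / get / put / activate / noop
    agent: str
    location: Optional[str] = None
    item: Optional[str] = None  # only used by "get"

    def __str__(self) -> str:
        parts = [p for p in (self.agent, self.location, self.item) if p]
        return f"{self.type}({', '.join(parts)})"

# One scheduling step is a joint list of per-agent commands, e.g.:
step = [
    Action("goto", "agent0", "storage"),
    Action("get", "agent1", "storage", "tomato"),
    Action("noop", "agent2"),
]
print([str(a) for a in step])
```

Since the planner is centralized, a single time step emits one such command per agent, with `noop` covering agents that should stay idle.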
3.1 TASK DEFINITION | 2309.09971#15 | MindAgent: Emergent Gaming Interaction
2309.09958 | 16 | # 3.1 Comparisons on Benchmarks
LLaVA-Bench. LLaVA-Bench (In-the-Wild)4 [12] is a diverse evaluation dataset consisting of 24 images with 60 questions in total, including indoor and outdoor scenes, memes, paintings, and sketches. Each image is paired with a manually curated, detailed description and a set of properly selected questions related to open-ended visual chat scenarios. Each question belongs to one of three types of tasks: conversations that contain simple visual recognition & QA questions, detailed descriptions that characterize the image with a long paragraph, and a complex reasoning task that focuses on deducing implications from an image. The language-only GPT-4 (gpt4-0314) is used to score the generated answers. The relative scores between the model output and the gold response are reported. We compare LLaVA against commercial visual chat systems, including Microsoft BingChat5 and Google Bard6, on LLaVA-Bench [12].
4: https://github.com/haotian-liu/LLaVA/blob/main/docs/LLaVA_Bench.md
5: https://www.bing.com/chat
6: https://bard.google.com/
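The relative scoring above can be reproduced from paired judge ratings. A minimal sketch, assuming the judge assigns each answer a 1-10 score and the relative score is the mean model score over the mean gold score, times 100 (the exact judging prompt lives in the LLaVA evaluation scripts):

```python
def relative_score(model_scores, gold_scores):
    """Relative score: 100 * mean(model ratings) / mean(gold ratings)."""
    assert len(model_scores) == len(gold_scores) > 0
    mean_model = sum(model_scores) / len(model_scores)
    mean_gold = sum(gold_scores) / len(gold_scores)
    return 100.0 * mean_model / mean_gold

# Hypothetical per-question GPT-4 ratings (1-10) for three questions.
model = [7.0, 6.0, 8.0]
gold = [8.0, 8.0, 9.0]
print(f"{relative_score(model, gold):.1f}")  # 84.0
```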
| 2309.09958#16 | An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models
2309.09971 | 16 | Table 2: Action space in CUISINEWORLD.
3.1 TASK DEFINITION
We follow prior works (Yao et al., 2023; Liu et al., 2023; Deng et al., 2023) to interactively evaluate LLMs as planning agents. Overall, the interactive evaluation can be formulated as a Markov Decision Process (S, A, T, R, G), with state space S, action space A (effectively indicating all the possible schedules that can be made at a single time step), transition dynamics T, reward function R, and task instruction space G. Note that, although there are multiple agents inside CUISINEWORLD that can be coordinated, as we mentioned above, we adopt a centralized planning scheme and thereby formulate our game as a single-agent and fully-observable decision-making problem. An illustration of the state & action space and the possible tasks of our game can be found in Figure 1. | 2309.09971#16 | MindAgent: Emergent Gaming Interaction
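The single-agent, fully-observable MDP formulation above admits a conventional interaction loop. A minimal sketch (the `reset`/`step` environment interface below is hypothetical, not the actual CUISINEWORLD API):

```python
from typing import Callable, Tuple

class ToyKitchen:
    """Stand-in environment with a hypothetical reset/step interface;
    it grants one unit of reward per step and ends after `horizon` steps."""
    def __init__(self, horizon: int = 3):
        self.horizon = horizon
        self.t = 0

    def reset(self) -> Tuple[str, str]:
        self.t = 0
        return "inside(storage, [tomato])", "make tomato salad"

    def step(self, schedule: str) -> Tuple[str, float, bool]:
        self.t += 1
        return f"state after step {self.t}", 1.0, self.t >= self.horizon

def run_episode(env, planner: Callable[[str, str], str], max_steps: int = 60) -> float:
    """Centralized planning loop over the MDP (S, A, T, R, G): the planner sees
    the instruction (from G) and the textual state (from S) and emits one joint
    schedule (from A) for all agents; the env applies T and returns R."""
    state, instruction = env.reset()
    total = 0.0
    for _ in range(max_steps):
        schedule = planner(instruction, state)
        state, reward, done = env.step(schedule)
        total += reward
        if done:
            break
    return total

print(run_episode(ToyKitchen(), lambda goal, state: "noop(agent0)"))  # 3.0
```

In the real benchmark, the planner slot would be filled by an LLM prompted with the instruction, the current state descriptions, and few-shot examples.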
2309.09958 | 17 |
The results are presented in Table 1. The 33B and 65B checkpoints outperform the 13B LLaVA model and Bing Chat. Despite the fact that LLaVA-Bench is small (thus the comparison might not be statistically signiï¬cant), the results are encouraging: compared to large LMM, small open-sourced LMM are far more cost-effective to be deployed in real-world applications. With negligible increase of inference latency, we can signiï¬cantly improve the performance for all model sizes by increasing the beam search size from 1 to 5. Our results show that larger LLaVA models generally exhibit better performance in tasks involving complex reasoning and generating detailed descriptions, which requires strong language competencies from larger LLM. In addition, larger LLaVA models obtain comparable results to BingChat in multi-turn, multi-modal conversation tasks that require strong image understanding capability. | 2309.09958#17 | An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models | Visual instruction tuning has recently shown encouraging progress with
open-source large multimodal models (LMM) such as LLaVA and MiniGPT-4. However,
most existing studies of open-source LMM are performed using models with 13B
parameters or smaller. In this paper we present an empirical study of scaling
LLaVA up to 33B and 65B/70B, and share our findings from our explorations in
image resolution, data mixing and parameter-efficient training methods such as
LoRA/QLoRA. These are evaluated by their impact on the multi-modal and language
capabilities when completing real-world tasks in the wild.
We find that scaling LMM consistently enhances model performance and improves
language capabilities, and performance of LoRA/QLoRA tuning of LMM are
comparable to the performance of full-model fine-tuning. Additionally, the
study highlights the importance of higher image resolutions and mixing
multimodal-language data to improve LMM performance, and visual instruction
tuning can sometimes improve LMM's pure language capability. We hope that this
study makes state-of-the-art LMM research at a larger scale more accessible,
thus helping establish stronger baselines for future research. Code and
checkpoints will be made public. | http://arxiv.org/pdf/2309.09958 | Yadong Lu, Chunyuan Li, Haotian Liu, Jianwei Yang, Jianfeng Gao, Yelong Shen | cs.CV, cs.CL | Released at LLaVA Model Zoo:
https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md | null | cs.CV | 20230918 | 20230918 | [
{
"id": "2307.06281"
},
{
"id": "2305.03726"
},
{
"id": "2306.14895"
},
{
"id": "2009.03300"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "2306.04751"
},
{
"id": "2305.14314"
},
{
"id": "2304.15010"
},
{
"id": "2307.09288"
},
{
"id": "2308.02490"
},
{
"id": "2308.01390"
}
] |
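The beam-size effect noted in the chunk above (improving results by raising the beam from 1 to 5) can be illustrated with a toy beam search over a made-up, prefix-conditioned token distribution. This is our own illustrative sketch, not the LLaVA decoding code.

```python
import math

def next_logprobs(prefix):
    # Made-up conditional distribution: "a" looks best at the first step,
    # but only the "b" prefix has a strong continuation.
    table = {
        "":  {"a": 0.5, "b": 0.4},
        "a": {"a": 0.5, "b": 0.5},
        "b": {"a": 0.1, "b": 0.9},
    }
    return {tok: math.log(p) for tok, p in table[prefix].items()}

def beam_search(beam_size, depth=2):
    beams = [("", 0.0)]  # (sequence, cumulative log-probability)
    for _ in range(depth):
        candidates = [(seq + tok, lp + tok_lp)
                      for seq, lp in beams
                      for tok, tok_lp in next_logprobs(seq).items()]
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_size]
    return max(beams, key=lambda c: c[1])

greedy = beam_search(1)  # commits to "a" early and ends with p = 0.25
beam5 = beam_search(5)   # recovers the better sequence "bb" with p = 0.36
```

A larger beam pays a small extra cost per step but can escape locally greedy choices, which matches the paper's observation of better outputs at negligible extra latency.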
2309.09971 | 17 | State Space S. In the CUISINEWORLD virtual kitchen, there are two types of entities: locations and agents. For each entity, the game provides a set of descriptions, and the aggregated descriptions of all entities constitute the state returned by the game. A location can be a storage, where ingredients can be obtained and waste dispensed; a serving table, where completed dishes should be placed; or a cooking tool, e.g. a pan or blender. We offer up to two descriptions for each location: inside(location, items), indicating what items (some ingredients, completed dishes, etc.) are now inside the location; and occupy(location), suggesting location is now being used
and cannot be touched, e.g. an activated blender. An agent is an entity that can be dispatched to complete tasks, and we provide up to three descriptions for each agent: at(location, agent), indicating that agent is now at location; hold(agent, items), indicating what items agent is holding; and finally occupy(agent), implying that agent is now operating a tool, e.g. chopping some fruits, and will not respond to any dispatching command. | 2309.09971#17 | MindAgent: Emergent Gaming Interaction | Large Language Models (LLMs) have the capacity of performing complex
scheduling in a multi-agent system and can coordinate these agents into
completing sophisticated tasks that require extensive collaboration. However,
despite the introduction of numerous gaming frameworks, the community has
insufficient benchmarks towards building general multi-agents collaboration
infrastructure that encompass both LLM and human-NPCs collaborations. In this
work, we propose a novel infrastructure - MindAgent - to evaluate planning and
coordination emergent capabilities for gaming interaction. In particular, our
infrastructure leverages existing gaming framework, to i) require understanding
of the coordinator for a multi-agent system, ii) collaborate with human players
via un-finetuned proper instructions, and iii) establish an in-context learning
on few-shot prompt with feedback. Furthermore, we introduce CUISINEWORLD, a new
gaming scenario and related benchmark that dispatch a multi-agent collaboration
efficiency and supervise multiple agents playing the game simultaneously. We
conduct comprehensive evaluations with new auto-metric CoS for calculating the
collaboration efficiency. Finally, our infrastructure can be deployed into
real-world gaming scenarios in a customized VR version of CUISINEWORLD and
adapted in existing broader Minecraft gaming domain. We hope our findings on
LLMs and the new infrastructure for general-purpose scheduling and coordination
can help shed light on how such skills can be obtained by learning from large
language corpora. | http://arxiv.org/pdf/2309.09971 | Ran Gong, Qiuyuan Huang, Xiaojian Ma, Hoi Vo, Zane Durante, Yusuke Noda, Zilong Zheng, Song-Chun Zhu, Demetri Terzopoulos, Li Fei-Fei, Jianfeng Gao | cs.AI, cs.HC, cs.MA | The first three authors contributed equally. 28 pages | null | cs.AI | 20230918 | 20230919 | [
{
"id": "2307.04721"
},
{
"id": "2210.16257"
},
{
"id": "2307.02485"
},
{
"id": "2304.03347"
},
{
"id": "2010.03768"
},
{
"id": "2306.06070"
},
{
"id": "2308.11339"
},
{
"id": "2308.03688"
},
{
"id": "2212.14882"
},
{
"id": "2302.06100"
},
{
"id": "2302.01560"
},
{
"id": "1903.03094"
},
{
"id": "2305.16291"
},
{
"id": "2010.09890"
},
{
"id": "2303.05398"
},
{
"id": "1910.03655"
},
{
"id": "2209.07753"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2305.00970"
}
] |
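The state predicates from the State Space section above (inside, occupy, at, hold) can be rendered from simple entity records. The class and field names below are our own illustration, not the CUISINEWORLD implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Location:
    name: str
    items: list = field(default_factory=list)
    occupied: bool = False  # e.g. an activated blender

@dataclass
class Agent:
    name: str
    location: str
    holding: list = field(default_factory=list)
    busy: bool = False  # operating a tool; ignores dispatching commands

def describe(locations, agents):
    # Aggregate per-entity descriptions into the textual game state.
    lines = []
    for loc in locations:
        lines.append(f"inside({loc.name}, {loc.items})")
        if loc.occupied:
            lines.append(f"occupy({loc.name})")
    for ag in agents:
        lines.append(f"at({ag.location}, {ag.name})")
        lines.append(f"hold({ag.name}, {ag.holding})")
        if ag.busy:
            lines.append(f"occupy({ag.name})")
    return "\n".join(lines)

state = describe(
    [Location("blender", ["tuna"], occupied=True)],
    [Agent("agent0", "storage", holding=["lettuce"])],
)
```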
2309.09958 | 18 | MM-VET. MM-VET [19] is designed based on the assumption that the intriguing capability of solving complicated tasks is often achieved by a generalist LMM which is able to integrate a variety of vision-language (VL) capabilities. MM-Vet contains 200 images and 218 questions (samples), aiming to evaluate 6 core VL capabilities (recognition, OCR, knowledge, language generation, spatial awareness, and math) and their combinations. For evaluation, an LLM-based evaluator (gpt4-0613) is used to score open-ended outputs of different forms. In Table 2, we report the results on MM-VET. The performance is consistently improved from 13B to 33B and 65B. The largest LLaVA model improves SoTA performance among the end-to-end open-source LMM. The most significant improvements are observed when evaluating the capabilities of knowledge and generation, followed by recognition and OCR. The performance on spatial and math remains comparable. The result reveals that the improved LLM capability is instrumental in storing more knowledge in the weights and leading to a stronger language responding capability.
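MM-Vet tags each sample with the set of capabilities it exercises, so a per-capability score can be computed as the mean evaluator score over all samples carrying that tag. The aggregation below is our own illustration of that scheme; the scores are made up.

```python
from collections import defaultdict

# Each sample: the capability set it exercises plus an evaluator score in [0, 1].
samples = [
    {"caps": {"rec", "know"}, "score": 0.8},
    {"caps": {"ocr", "math"}, "score": 0.4},
    {"caps": {"rec"}, "score": 0.6},
]

def per_capability(samples):
    totals, counts = defaultdict(float), defaultdict(int)
    for s in samples:
        for cap in s["caps"]:
            totals[cap] += s["score"]
            counts[cap] += 1
    # Mean score per capability across all samples tagged with it.
    return {cap: totals[cap] / counts[cap] for cap in totals}

cap_scores = per_capability(samples)
```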
2309.09971 | 18 | Action Space A. An action in CUISINEWORLD is a list of dispatching commands. Given N agent entities, a total of N commands needs to be generated. The game provides the following commands (also illustrated in Table 2): 1) goto(agent, location), to let agent move to location; 2) get(agent, location, item), to let agent get a specific item from location; 3) put(agent, location), to put whatever agent is holding into location; 4) activate(agent, location), to let agent turn on location if it is a cooking tool, e.g. a blender; 5) noop(agent), to have agent perform no action in this round of dispatching. We provide more detailed illustrations and rules about the action space in the appendix. Note that, to avoid possible confusion from multiple agents being dispatched to operate on the same location, the dispatcher also needs to properly order the dispatching commands, as they will be executed sequentially.
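The dispatching DSL described above lends itself to a simple pre-execution check. The verb set comes from the paper; the parsing and conflict rules below are our own illustrative sketch.

```python
import re

# Verb name -> expected number of arguments (agent always comes first).
VERBS = {"goto": 2, "get": 3, "put": 2, "activate": 2, "noop": 1}

def validate(commands):
    """Check one round of dispatching: known verbs, correct arity,
    one command per agent, and no two commands targeting one location."""
    seen_agents, seen_locations = set(), set()
    for cmd in commands:
        m = re.fullmatch(r"(\w+)\(([^)]*)\)", cmd)
        if not m:
            return False, f"unparsable: {cmd}"
        verb = m.group(1)
        args = [a.strip() for a in m.group(2).split(",")]
        if verb not in VERBS or len(args) != VERBS[verb]:
            return False, f"bad verb/arity: {cmd}"
        agent = args[0]
        if agent in seen_agents:
            return False, f"duplicate command for {agent}"
        seen_agents.add(agent)
        if verb != "noop":
            loc = args[1]
            if loc in seen_locations:
                return False, f"location conflict: {loc}"
            seen_locations.add(loc)
    return True, "ok"

ok, msg = validate(["goto(agent0, pan)", "get(agent1, storage, tuna)", "noop(agent2)"])
```

Forbidding same-location commands is one possible policy; the paper instead orders them for sequential execution, which a dispatcher could implement on top of the same parse.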
2309.09958 | 19 | # 3.2 Scaling up LLaVA
The experiments are conducted to answer three research questions.
(1) Which scaling factor matters? We study the relative contribution of three scaling-up factors to the performance improvement of LLaVA. The results are summarized in Table 3 (a).
• Model size. Increasing the model size consistently improves the overall performance. We conjecture that a larger data size is essential to train a larger model. For example, if we only train on LLaVA-80K data, we see a smaller gain when the model size becomes larger.
⢠Image resolution. By ï¬xing the CLIP ViT image encoder, we compare the variants that are pre-trained to take image resolution 224Ã224 and 336Ã336, and ï¬nd that the higher resolution consistently yields 2-3 points improvement across all four LLM sizes.
⢠Data mixing. Larger models tend to have higher capability of ï¬tting the instruction data. By mixing the language-only instruction data (ShareGPT) with LLaVA-80K, we can improve model performance by 2 points, compared to training on multimodal instruction data only.
In Table 3 (b), we present our results on MM-Bench [13], which contains a set of 2,974 questions that evaluate models' reasoning skills across six categories. The combination of the three factors improves the baseline LLaVA 7B model reported in [13].
2309.09971 | 19 | Tasks and Reward. A task in CUISINEWORLD is a dish order, ranging from the most basic tunaSashimi, which can be made by simply chopping some tuna meat, to sophisticated dishes like porkPasta that require various cooking tools. In a game episode with a maximum of T steps, every τ_int steps (we name this the task interval), a new task or dish order is added to the active task list. A task is viewed as completed and removed from the active task list when a matched dish has been put on the serving table. On the contrary, a task is deemed to have failed and is removed from the list when it reaches its lifetime τ_lft. The lifetime depends on the complexity of the dish; details can be found in the appendix. Along with the tasks, the game provides rewards & penalties or feedback on certain occasions, e.g. when a task is just completed or some infeasible commands are dispatched. Due to the space limit, we defer details on tasks to Appendix B.
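The task-spawning and expiry mechanics above (a new order every task interval, failure after the lifetime elapses) can be sketched as a toy simulation. All parameters below are illustrative, not the benchmark's actual values.

```python
def simulate(T, tau_int, tau_lft, completed_at=()):
    """Toy order queue: spawn a task every tau_int steps; a task fails
    tau_lft steps after spawning unless completed first."""
    active, done, failed = [], 0, 0
    for t in range(T):
        if t % tau_int == 0:
            active.append(t)              # record the spawn time
        if t in completed_at and active:
            active.pop(0)                 # serve the oldest pending order
            done += 1
        for spawn in [s for s in active if t - s >= tau_lft]:
            active.remove(spawn)          # lifetime exceeded -> failure
            failed += 1
    return done, failed

done, failed = simulate(T=12, tau_int=3, tau_lft=4, completed_at={2})
```

With a smaller tau_int, orders flood in faster than they can be served, which is exactly the stress regime the CoS metric probes.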
IMPLEMENTING CUISINEWORLD
2309.09958 | 20 | (2) When should the parameter-efficient training method be considered? As model size increases, it becomes necessary to consider using tuning methods that are more efficient than full-model fine-tuning. LoRA and QLoRA are well-known parameter-efficient tuning methods. As shown in Table 4, we report compute cost using GPU hours per node, because the unit can be equivalent to the price $13.63/hour (ND A100 v4 series) on Azure7. The total cost can be estimated by multiplying the #hours and #epochs.
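The cost estimate described above (GPU hours per node × epochs × node-hour price) is simple arithmetic; a tiny helper makes it explicit. The 5.80 GPU-hour input is taken from the 33B full-model column of Table 4 as an example; other inputs are illustrative.

```python
PRICE_PER_NODE_HOUR = 13.63  # USD, ND A100 v4 series on Azure (quoted above)

def training_cost(gpu_hours_per_node, epochs, nodes=1):
    # Total dollars = per-node hours x epochs x node count x hourly rate.
    return gpu_hours_per_node * epochs * nodes * PRICE_PER_NODE_HOUR

cost = training_cost(gpu_hours_per_node=5.80, epochs=1)
```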
In Table 4(a), we train both the 33B and 65B models with LoRA ranks 8 and 64 for 1 epoch on the LLaVA-80K instruction-tuning dataset. For models with 33B parameters and above, as we increase the LoRA rank values, we notice an increase in both performance and cost until full-model tuning reaches its maximum performance for a specific model size. In the case of the 13B model, we find that a rank of 64 can deliver comparable performance to full-model tuning. The cost is more related to the total number of parameters than the number of trainable parameters. The cost increase
# 7https://azure.microsoft.com/en-us/pricing/details/machine-learning/
2309.09971 | 20 | IMPLEMENTING CUISINEWORLD
The implementation of CUISINEWORLD mostly follows the spirit of Overcooked!, a renowned video game. Therefore we refer to many of its game mechanisms while simplifying some of them, e.g. we skip low-level control and assume all agent have access to all location at any time (detailed comparisons between CUISINEWORLD and the original video game can be found in appendix). Specifically, we crawled the rules and recipes from the community-contributed wiki1, streamlined them and made necessary modifications, ending up with the basic version of CUISINEWORLD com- prising 10 types of location (serving table, storage, and 8 different cooking tools), 27 types of ingredients, and 33 unique dishes. We group the dishes based on their difficulty to make (primarily the number of cooking tools involved) and design 12 game levels, which are further categorized into 4 classes: entry, simple, intermediate and advanced, with 3 levels each. Note that the recipes, dishes, and levels can be easily extended to allow more challenging tasks.
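The recipe/level organization described above — dishes grouped by difficulty, primarily the number of cooking tools involved — could be represented as below. The entries are our own illustration, not the actual CUISINEWORLD data.

```python
# Hypothetical recipe records: required ingredients and cooking tools per dish.
RECIPES = {
    "tunaSashimi": {"ingredients": ["tuna"], "tools": ["chopping_board"]},
    "porkPasta": {"ingredients": ["pork", "flour"],
                  "tools": ["chopping_board", "pot"]},
}

def difficulty(dish):
    # Difficulty proxy used in the text: number of cooking tools involved.
    return len(RECIPES[dish]["tools"])

# Group dishes into difficulty classes, from which levels can be built.
levels = {}
for dish in RECIPES:
    levels.setdefault(difficulty(dish), []).append(dish)
```

Extending the benchmark then amounts to adding entries to RECIPES, as the chunk notes.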
3.3 EVALUATION METRIC
2309.09971 | 21 | 3.3 EVALUATION METRIC
Collaboration Score (CoS). We would like to evaluate to what extent the dispatcher (played by an LLM) can coordinate multiple agents into completing dish orders across different scenarios. Similar to the original Overcooked! game, we are particularly interested in this question: can the dispatcher still coordinate the agents into efficient collaborations with a smaller τ_int, i.e. when more dish orders are flooding in? Our hypothesis is that an ideal dispatcher should be capable of coordinating agents until there are far more tasks than the system can handle. Therefore, we introduce the collaboration score CoS, defined as below:
\[ \mathrm{CoS} = \frac{1}{M} \sum_{i=1}^{M} \frac{\#\,\text{completed task}\left[\tau_{\mathrm{int},(i)}\right]}{\#\,\text{completed task}\left[\tau_{\mathrm{int},(i)}\right] + \#\,\text{failed task}\left[\tau_{\mathrm{int},(i)}\right]} \]
where M is the total number of τ_int values we evaluate. Effectively, CoS is the average task completion rate across different τ_int conditions. In our default setting, we use M = 5. While the actual values of τ_int
# 1https://steamcommunity.com/sharedfiles/filedetails/?id=1769729191
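The CoS definition above translates directly into code: average the task completion rate over the M task-interval settings. The per-interval counts below are made-up examples.

```python
def collaboration_score(counts):
    """counts: list of (completed, failed) pairs, one per tau_int setting."""
    M = len(counts)
    return sum(c / (c + f) for c, f in counts) / M

# One (completed, failed) pair per tau_int, from easiest to most flooded.
cos = collaboration_score([(10, 0), (8, 2), (5, 5), (3, 7), (1, 9)])
```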
2309.09958 | 23 | (b) Performance scores on MM-Bench. The skills to evaluate include logic reasoning (LR), attribute reasoning (AR), relation reasoning (RR), fine-grained single-instance perception (FP-S), fine-grained cross-instance perception (FP-C), and coarse perception (CP).
Table 3: Performance of scaling up model size, image resolution, and data mixing.
                               |  7B   |     13B      |            33B             |     65B
LoRA Rank                      |  Full |  64    Full  |  8     64-QLoRA  64   Full |  64     Full
Performance ↑                  |  65.9 |  70.1  70.1  |  70.3  71.6      71.8 72.0 |  72.2   72.3
Time (GPU Hours per node) ↓    |  1.3  |  2.1   2.3   |  4.62  4.68      4.79 5.80 |  9.17   13.50
# Trainable Parameters (B) ↓   |  7    |  0.26  13    |  0.06  0.49      0.49 33   |  0.81   65
2309.09971 | 23 | Figure 3: An overview of our MINDAGENT architecture. Planning Skill & Tool Use: the game environment requires diverse planning skills and tool use to complete tasks, and it emits related game information. This module also converts relevant game data into a structured text format so the LLMs can process it. LLM: the main workhorse of our infrastructure; it makes decisions and acts as a dispatcher for the multi-agent system. Memory History: a storage utility that stores relevant information. Action Module: extracts actions from text inputs and converts them into a domain-specific language, validating the DSLs so they don't cause errors when executed.
While these settings depend on the game level, we ensure they elicit a wide range of difficulty, including both extremely relaxed and intense scenarios.
In short, CUISINEWORLD is a game that emulates a virtual kitchen, where several robots are commanded to use various cooking tools and ingredients to prepare as many dish orders as possible in a limited period of time. To facilitate collaboration, new orders keep flooding in while the existing ones should be completed before expiration. Therefore, LLMs need to properly coordinate these robots to maximize overall productivity. CUISINEWORLD also offers game levels with a wide range of planning difficulty: dishes with different complexity (number of ingredients and tools involved), number of agents, order frequency and lifetime, etc., making it an ideal test bed for LLM-based multi-agent planning. | 2309.09971#23 | MindAgent: Emergent Gaming Interaction | Large Language Models (LLMs) have the capacity of performing complex
scheduling in a multi-agent system and can coordinate these agents into
completing sophisticated tasks that require extensive collaboration. However,
despite the introduction of numerous gaming frameworks, the community has
insufficient benchmarks towards building general multi-agents collaboration
infrastructure that encompass both LLM and human-NPCs collaborations. In this
work, we propose a novel infrastructure - MindAgent - to evaluate planning and
coordination emergent capabilities for gaming interaction. In particular, our
infrastructure leverages existing gaming framework, to i) require understanding
of the coordinator for a multi-agent system, ii) collaborate with human players
via un-finetuned proper instructions, and iii) establish an in-context learning
on few-shot prompt with feedback. Furthermore, we introduce CUISINEWORLD, a new
gaming scenario and related benchmark that dispatch a multi-agent collaboration
efficiency and supervise multiple agents playing the game simultaneously. We
conduct comprehensive evaluations with new auto-metric CoS for calculating the
collaboration efficiency. Finally, our infrastructure can be deployed into
real-world gaming scenarios in a customized VR version of CUISINEWORLD and
adapted in existing broader Minecraft gaming domain. We hope our findings on
LLMs and the new infrastructure for general-purpose scheduling and coordination
can help shed light on how such skills can be obtained by learning from large
language corpora. | http://arxiv.org/pdf/2309.09971 | Ran Gong, Qiuyuan Huang, Xiaojian Ma, Hoi Vo, Zane Durante, Yusuke Noda, Zilong Zheng, Song-Chun Zhu, Demetri Terzopoulos, Li Fei-Fei, Jianfeng Gao | cs.AI, cs.HC, cs.MA | The first three authors contributed equally. 28 pages | null | cs.AI | 20230918 | 20230919 | [
{
"id": "2307.04721"
},
{
"id": "2210.16257"
},
{
"id": "2307.02485"
},
{
"id": "2304.03347"
},
{
"id": "2010.03768"
},
{
"id": "2306.06070"
},
{
"id": "2308.11339"
},
{
"id": "2308.03688"
},
{
"id": "2212.14882"
},
{
"id": "2302.06100"
},
{
"id": "2302.01560"
},
{
"id": "1903.03094"
},
{
"id": "2305.16291"
},
{
"id": "2010.09890"
},
{
"id": "2303.05398"
},
{
"id": "1910.03655"
},
{
"id": "2209.07753"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2305.00970"
}
] |
2309.09958 | 24 | Table 4: The trade-off between performance and compute cost among different model sizes and training methods on LLaVA-80K data. "Full" indicates full-model fine-tuning. "Time" is reported as the total GPU hours to finish 1 epoch of training (running time × #GPUs) divided by 8 (#GPUs per node). All models are trained on LLaVA-80K data; results are obtained by averaging 3 repeated evaluation runs with the same setup on LLaVA-Bench.
The cost increase due to raising the LoRA rank for a given model size is significantly smaller than the cost increase from enlarging the model size. For example, increasing the LoRA rank from 8 to 64 nearly matches the performance of LoRA fine-tuning a 65B model with the same rank, but requires only 50% of the 65B model's training cost. In practice, we find that tuning the 33B model provides a good trade-off between cost and performance.
2309.09971 | 24 | # 4 MINDAGENT: INFRASTRUCTURE FOR GAMING AI
4.1 INFRASTRUCTURE
Our first foray into the challenging CUISINEWORLD benchmark is an interactive multi-agent planning framework for LLMs: MINDAGENT. It adopts a minimalist design for the purpose of demonstrating the emergent capacity of LLMs in scheduling and coordination, while also bringing in exploratory prompting techniques that facilitate better planning and shed some light on future approaches. Our infrastructure follows in-context learning. We will outline the key techniques below:
To facilitate in-context learning, our MINDAGENT infrastructure is composed of three primary components: the prompt, current state, and memory.
Within the prompt component, there are four distinct sub-components: recipes, general instructions, inference knowledge, and a one-shot demo.
Recipes. outline the hierarchical procedure for preparing various dishes at the given level. They specify the necessary ingredients for each intermediate or final product, the appropriate tools required, and the expected outcome post-cooking.
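A minimal sketch of how these prompt components might be assembled into the text sent to the LLM at each time step. The section layout, function name, and all literal strings here are illustrative assumptions, not the paper's actual prompt text:

```python
# Sketch: assembling a MindAgent-style prompt from its sub-components.
# Every literal string below is a placeholder, not the real prompt content.

def build_prompt(recipes, instructions, inference_knowledge, one_shot_demo,
                 current_state, memory_history):
    """Concatenate the static prompt parts, the interaction history, and
    the current observation into one text block for the LLM."""
    sections = [
        "## Recipes\n" + "\n".join(recipes),
        "## Instructions\n" + instructions,
        "## Inference Knowledge\n" + inference_knowledge,
        "## One-shot Demo\n" + one_shot_demo,
        "## Memory History\n" + "\n".join(memory_history),
        "## Current State\n" + current_state,
    ]
    return "\n\n".join(sections)

prompt = build_prompt(
    recipes=["tunaSashimi: slice tuna with knife -> tunaSashimi"],
    instructions="Agents may goto, get, put, activate, or noop each step.",
    inference_knowledge="Hint: never assign two actions to one agent.",
    one_shot_demo="t=0: agent0 get(tuna, storage) ...",
    current_state="agent0 at storage holding nothing; blender empty",
    memory_history=["t=0: agent0 get(tuna, storage)"],
)
print(prompt.splitlines()[0])  # -> ## Recipes
```

The static parts (recipes, instructions, knowledge, demo) stay fixed across time steps, while the memory and current state are refreshed every step.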
2309.09958 | 25 | Different LoRA variations have similar performance, and QLoRA requires lower GPU memory cost and running-time cost than LoRA. When large models (e.g., 65B) are trained in DeepSpeed ZeRO2 mode, they can fit into GPU memory with QLoRA, while yielding OOM issues with LoRA. In the experiments, we find that the hyperparameters of LoRA have a large impact on performance: (i) a large learning rate and alpha value of LoRA improve the results significantly. For example, with the same rank=64, reducing the learning rate to 2 × 10^-5 and alpha to 16 decreases the performance from 71.8 to 65.5 on LLaVA-Bench. (ii) Under the same setting, large ranks lead to little improvement; e.g., increasing the rank from 64 to 128 and 512 improves the score from 65.5 to 66.1 and 68.1, respectively.
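The rank-proportional cost of LoRA can be made concrete with a back-of-the-envelope count: an adapter on a d_out × d_in weight matrix adds r·(d_in + d_out) trainable parameters, so the trainable size grows linearly in the rank. The sketch below uses illustrative layer counts and widths (not the exact LLaVA configurations) to show why rank 8 is 8× cheaper than rank 64 in trainable parameters, while both are tiny next to full fine-tuning:

```python
def lora_params(d_in: int, d_out: int, r: int) -> int:
    # LoRA factorizes the weight update as B @ A,
    # with A of shape (r, d_in) and B of shape (d_out, r).
    return r * (d_in + d_out)

def total_lora_params(d: int, n_matrices: int, r: int) -> int:
    # Illustrative: adapters on n square projection matrices of width d.
    return n_matrices * lora_params(d, d, r)

# Hypothetical 33B-scale setting: hidden width 6656, 240 adapted matrices.
p8 = total_lora_params(d=6656, n_matrices=240, r=8)
p64 = total_lora_params(d=6656, n_matrices=240, r=64)
assert p64 == 8 * p8  # trainable parameters scale linearly with rank
print(p8, p64)
```

This linearity is why raising the rank is far cheaper than raising the model size, as the trade-off discussion above notes.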
2309.09971 | 25 |
Instructions. detail the foundational rules of CUISINEWORLD. These instructions delineate the array of actions agents can undertake within the game and enumerate the characteristics of every tool available in the current kitchen scenario. Moreover, they inform agents about the base ingredients retrievable from storage, as well as all potential intermediate products they can procure. Agents are also explicitly advised to remain cautious about feedback from the environment.
Inference Knowledge. houses insights and helpful hints for the agent. When utilized appropriately, these hints can guide agents to sidestep potential errors and enhance their collaborative efficiency.
One-shot Demo. presents a step-by-step demonstration of the preparation of a distinct dish, different from the other dishes at the current level. This demonstration spans several time steps, each of which is incorporated as part of the prompt. It illustrates the major procedures for cooking one dish in CUISINEWORLD, including obtaining ingredients, putting ingredients into different tools, transporting intermediate ingredients, and delivering the final dish to the serving table.
2309.09958 | 26 | A LMM with strong capabilities in both language and multimodal? We expand our evaluation in two aspects: (i) MM-VET is added to measure the integrated multimodal capabilities of LMM; (ii) the pure language ability of LMM is measured using Vicuna-80 [16] and MMLU [6], where the former evaluates the instruction-following ability in real-world language tasks, and the latter evaluates the multilingual multi-task language ability. The results are shown in Table 5, where all models are full-model fine-tuned.
Compared to Vicuna, which initializes the LLM weights of LLaVA, it is surprising to observe that LLaVA, after being trained solely on multimodal instruction data, exhibits a comparable language capability. Mixing language instruction data can boost LLaVA's multimodal ability, but not the language ability. This is partially attributed to the inclusion of complex reasoning questions and long-form answers in LLaVA-Instruct-158K, which helps maintain the language capabilities of LLaVA.
2309.09971 | 26 | Current State. provides a snapshot of the prevailing observations from the environment. It encompasses information such as the agents' locations, the objects currently in the agents' possession, the tools that are accessible within the environment, the ingredients present within each tool, and the tools that are actively in use. Moreover, it includes optional feedback from the environment, triggered when the agents' actions contravene the environment rules, for instance, when assigning two distinct actions to the same agent.
Memory History. archives the interaction history with the environment. Specifically, it chronicles the state of the environment and the state of the agents at every time step.
In addition to the prompt modules, additional modules are implemented to help interface between LLMs and CUISINEWORLD.
Action Extraction. employs a regular expression matching procedure to distill agent actions from the LLM's textual output. This module is indispensable because, on occasion, the LLM's output is not clean: it contains information reflecting the model's internal thought processes, and at times the LLM might even issue apologies for prior missteps in reaction to environment feedback.
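A minimal sketch of such a regex-based extractor. The action line format (`agent0: goto(storage)`), the verb names, and the function itself are assumptions for illustration, since the chunk does not give the exact DSL grammar:

```python
import re

# Assumed DSL line format, e.g. "agent0: goto(blender)" (illustrative only).
ACTION_RE = re.compile(r"(agent\d+)\s*:\s*([a-z_]+)\(([^)]*)\)")

def extract_actions(llm_output: str) -> dict:
    """Pull one (verb, args) pair per agent out of possibly noisy LLM text,
    ignoring apologies and chain-of-thought chatter around the actions."""
    actions = {}
    for agent, verb, args in ACTION_RE.findall(llm_output):
        # Keep only the first action proposed for each agent.
        actions.setdefault(agent, (verb, [a.strip() for a in args.split(",") if a.strip()]))
    return actions

noisy = """Sorry about the earlier mistake. Thinking step by step...
agent0: goto(storage)
agent1: put(tuna, blender)"""
print(extract_actions(noisy))
# {'agent0': ('goto', ['storage']), 'agent1': ('put', ['tuna', 'blender'])}
```

The surrounding apology and reasoning text is simply skipped because it never matches the action pattern.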
Action Validation. utilizes a look-ahead checking mechanism. This module parses the proposed actions, assessing their feasibility. Should an action be deemed inexecutable, an error message is promptly returned.
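The look-ahead check can be sketched as a dry run of each proposed action against the current state. The two rules encoded below (known locations only, non-empty action tuples) are illustrative examples, not the full CUISINEWORLD rule set:

```python
def validate_actions(actions, state):
    """Dry-run proposed actions against the state; return (ok, errors)
    without mutating anything, so invalid plans never reach the game."""
    errors = []
    for agent, (verb, args) in actions.items():
        if not verb:
            errors.append(f"{agent}: empty action")
        if verb == "goto" and args and args[0] not in state["locations"]:
            errors.append(f"{agent}: unknown location {args[0]!r}")
    return (not errors, errors)

# Hypothetical kitchen state for illustration.
state = {"locations": {"storage", "blender", "servingtable"}}
ok, errs = validate_actions(
    {"agent0": ("goto", ["storage"]), "agent1": ("goto", ["oven"])}, state)
print(ok, errs)  # -> False, one error for agent1's unknown location
```

On failure, the error strings would be fed back to the LLM as environment feedback rather than executed.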
2309.09958 | 27 |
| Model | Data Mix | LLaVA-Bench | MM-VET | Vicuna-80 | MMLU |
|---|---|---|---|---|---|
| Vicuna-13B | - | - | - | 79.9 | 55.8 |
| LLaVA-13B | ✗ | 70.1 | 32.5 | 79.6 | 55.0 |
| Vicuna-33B | - | - | - | 85.6 | 59.0 |
| LLaVA-33B | ✗ | 72.0 | 32.9 | 85.3 | 56.1 |
| LLaVA-33B | ✓ | 73.9 | 34.1 | 80.3 | 58.6 |
| Vicuna-65B | - | - | - | 83.2 | 62.5 |
| LLaVA-65B | ✗ | 72.3 | 35.5 | 84.5 | 62.6 |
| LLaVA-65B | ✓ | 74.2 | 36.4 | 82.6 | 62.2 |
| LLaMA-2-70B-Chat | - | - | - | 84.7 | 63.1 |
| LLaVA-70B | ✗ | 69.8 | 35.4 | 81.3 | 65.1 |

(LLaVA-Bench and MM-VET are multimodal benchmarks; Vicuna-80 and MMLU are language benchmarks.)
Table 5: Performance on both multimodal and language capabilities.
2309.09971 | 27 | INFRASTRUCTURE MECHANISM
Assume a multi-agent system with a total of N agents, which must complete a sequence of P different tasks, where the p-th task has M_p different sub-tasks. Furthermore, the number and types of tasks are unknown at the beginning of the episode. The environment samples a task for the agents to finish at a given interval, and the agents then need to complete the designated task along with the other tasks in the task queue. In addition, each task has an expiration time, after which it is marked as a failure. The objective of the multi-agent system is to finish as many tasks as possible, and to fail as few tasks as possible, within a given time frame.
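The episode bookkeeping described above can be sketched as follows; the task names, lifetimes, and class layout are illustrative assumptions, not the benchmark's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    arrive_t: int
    lifetime: int          # steps before the order expires
    done: bool = False

    def expired(self, t: int) -> bool:
        return not self.done and t >= self.arrive_t + self.lifetime

@dataclass
class Episode:
    horizon: int
    queue: list = field(default_factory=list)
    completed: int = 0
    failed: int = 0

    def step(self, t: int, finished_names: set):
        # Credit dishes delivered this step, then expire stale orders.
        for task in self.queue:
            if not task.done and task.name in finished_names:
                task.done = True
                self.completed += 1
        for task in [x for x in self.queue if x.expired(t)]:
            self.queue.remove(task)
            self.failed += 1

ep = Episode(horizon=10)
ep.queue.append(Task("tunaSashimi", arrive_t=0, lifetime=3))
ep.queue.append(Task("porridge", arrive_t=0, lifetime=3))
ep.step(t=1, finished_names={"tunaSashimi"})
ep.step(t=3, finished_names=set())
print(ep.completed, ep.failed)  # -> 1 1
```

One order is delivered in time and counted as completed; the other outlives its lifetime and is counted as a failure, matching the success/failure objective above.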
We aim to find valid and optimal task planning, scheduling, and allocations. We define $q_{pim}$ and $c_{pim}$ as the quality and cost, respectively, of allocating agent $i$ to work on sub-task $m$ of the $p$-th task in the episode. The combined utility for the sub-task is then:

$$u_{pim} = \begin{cases} q_{pim} - c_{pim}, & \text{if agent } i \text{ can execute sub-task } m \text{ for the } p\text{-th task in the episode} \\ -\infty, & \text{otherwise.} \end{cases}$$
We define the assignment of sub-task m to agent i as
$$v_{pim} = \begin{cases} 1, & \text{if agent } i \text{ is assigned to sub-task } m \text{ for the } p\text{-th task in the episode} \\ 0, & \text{otherwise.} \end{cases}$$
2309.09958 | 28 |
We also train LLaVA-70B based on the LLaMA-2-70B-Chat checkpoint [15], and find mixed results on multimodal and language abilities. Interestingly, we improve LLaMA-2-70B-Chat by 2.4 points on MMLU, yielding an overall MMLU score of 65.1, which is the best performance for the 70B model size, according to [17] and the Chatbot Arena Leaderboard. To the best of our knowledge, this is the first reported result showing that visual instruction tuning improves the language ability of a large-scale LMM.
# 4 Conclusions and Limitations
2309.09971 | 28 | We define the assignment of sub-task m to agent i as
$$v_{pim} = \begin{cases} 1, & \text{if agent } i \text{ is assigned to sub-task } m \text{ for the } p\text{-th task in the episode} \\ 0, & \text{otherwise} \end{cases}$$
The goal is to maximize the utility of the episode under a time constraint. Define the execution time for task $m$ by agent $i$ for the $p$-th task in the episode as $\tau_{pim}$, and the maximum time allowed to execute the task as $T_{max}$; we can then express the task decomposition and assignment problem as follows:
$$\arg\max_{v} \sum_{p=1}^{P} \sum_{i=1}^{N} \sum_{m=1}^{M_p} u_{pim} v_{pim} \tag{2}$$
Subject to:
$$\sum_{i} \sum_{m} \tau_{pim} v_{pim} \leq T_{max} \quad \forall p \in P$$
$$\sum_{i} v_{pim} \leq 1 \quad \forall m \in M,\ \forall p \in P$$
$$v_{pim} \in \{0, 1\} \quad \forall i \in N,\ \forall m \in M,\ \forall p \in P$$
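For intuition, the assignment program above can be sketched as a tiny brute-force solver for a single task; the instance below (agents, utilities $u$, execution times $\tau$, budget $T_{max}$) is purely illustrative and not from the paper:

```python
from itertools import product

def best_assignment(utility, exec_time, t_max):
    """Exhaustively search binary assignments for one task.

    utility[i][m]  : u_im, utility of agent i doing sub-task m
    exec_time[i][m]: tau_im, time agent i needs for sub-task m
    t_max          : total time budget
    Returns (best_value, assignment) where assignment maps
    sub-task m -> agent i, or None if the sub-task is skipped.
    """
    n_agents = len(utility)
    n_tasks = len(utility[0])
    best_val, best = 0, {m: None for m in range(n_tasks)}
    # Each sub-task gets at most one agent: choose from the agents or "skip"
    # (encoded as the extra index n_agents), matching the sum_i v <= 1 constraint.
    for choice in product(range(n_agents + 1), repeat=n_tasks):
        total_u = total_t = 0
        for m, c in enumerate(choice):
            if c < n_agents:  # c == n_agents means sub-task m is skipped
                total_u += utility[c][m]
                total_t += exec_time[c][m]
        if total_t <= t_max and total_u > best_val:
            best_val = total_u
            best = {m: (c if c < n_agents else None) for m, c in enumerate(choice)}
    return best_val, best

# Two agents, two sub-tasks, budget 4: agent 0 takes sub-task 0, agent 1 takes sub-task 1.
print(best_assignment([[3, 1], [2, 4]], [[2, 2], [2, 2]], 4))
```

The exponential enumeration is only viable for toy instances, which is consistent with the observation below that the general problem is not solvable in polynomial time.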
As pointed out by (Korsah et al., 2013), this problem cannot be solved in polynomial time. In this work, we tackle this problem by using large-language models. | 2309.09971#28 | MindAgent: Emergent Gaming Interaction | Large Language Models (LLMs) have the capacity of performing complex
scheduling in a multi-agent system and can coordinate these agents into
completing sophisticated tasks that require extensive collaboration. However,
despite the introduction of numerous gaming frameworks, the community has
insufficient benchmarks towards building general multi-agents collaboration
infrastructure that encompass both LLM and human-NPCs collaborations. In this
work, we propose a novel infrastructure - MindAgent - to evaluate planning and
coordination emergent capabilities for gaming interaction. In particular, our
infrastructure leverages existing gaming framework, to i) require understanding
of the coordinator for a multi-agent system, ii) collaborate with human players
via un-finetuned proper instructions, and iii) establish an in-context learning
on few-shot prompt with feedback. Furthermore, we introduce CUISINEWORLD, a new
gaming scenario and related benchmark that dispatch a multi-agent collaboration
efficiency and supervise multiple agents playing the game simultaneously. We
conduct comprehensive evaluations with new auto-metric CoS for calculating the
collaboration efficiency. Finally, our infrastructure can be deployed into
real-world gaming scenarios in a customized VR version of CUISINEWORLD and
adapted in existing broader Minecraft gaming domain. We hope our findings on
LLMs and the new infrastructure for general-purpose scheduling and coordination
can help shed light on how such skills can be obtained by learning from large
language corpora. | http://arxiv.org/pdf/2309.09971 | Ran Gong, Qiuyuan Huang, Xiaojian Ma, Hoi Vo, Zane Durante, Yusuke Noda, Zilong Zheng, Song-Chun Zhu, Demetri Terzopoulos, Li Fei-Fei, Jianfeng Gao | cs.AI, cs.HC, cs.MA | The first three authors contributed equally. 28 pages | null | cs.AI | 20230918 | 20230919 | [
{
"id": "2307.04721"
},
{
"id": "2210.16257"
},
{
"id": "2307.02485"
},
{
"id": "2304.03347"
},
{
"id": "2010.03768"
},
{
"id": "2306.06070"
},
{
"id": "2308.11339"
},
{
"id": "2308.03688"
},
{
"id": "2212.14882"
},
{
"id": "2302.06100"
},
{
"id": "2302.01560"
},
{
"id": "1903.03094"
},
{
"id": "2305.16291"
},
{
"id": "2010.09890"
},
{
"id": "2303.05398"
},
{
"id": "1910.03655"
},
{
"id": "2209.07753"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2305.00970"
}
] |
2309.09958 | 29 | We present an empirical study of scaling the language model size for LMM. Our main findings are: (i) Scaling LMM consistently enhances model performance, resulting in significant improvements in language capabilities, primarily due to the increased LLM model size. We leave it to future work how to scale the vision encoder to enhance the visual capabilities and improve model performance on vision recognition and understanding tasks. (ii) Parameter-efficient methods such as LoRA/QLoRA are viable solutions to finetune large-scale LLMs for a good performance-cost trade-off in some real-world settings with limited GPU memory. We observe that LoRA/QLoRA's performance is comparable to that of fine-tuning the full model, establishing their effectiveness through significant cost reduction in both model training and serving. (iii) Our study of training data curation reveals that properly selecting image resolutions and mixing multimodal-language data for model training can significantly improve the performance of the resultant LMM. We also show for the first time that visual instruction tuning can improve LMM's language capability. Note that the | 2309.09958#29 | An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models | Visual instruction tuning has recently shown encouraging progress with
open-source large multimodal models (LMM) such as LLaVA and MiniGPT-4. However,
most existing studies of open-source LMM are performed using models with 13B
parameters or smaller. In this paper we present an empirical study of scaling
LLaVA up to 33B and 65B/70B, and share our findings from our explorations in
image resolution, data mixing and parameter-efficient training methods such as
LoRA/QLoRA. These are evaluated by their impact on the multi-modal and language
capabilities when completing real-world tasks in the wild.
We find that scaling LMM consistently enhances model performance and improves
language capabilities, and performance of LoRA/QLoRA tuning of LMM are
comparable to the performance of full-model fine-tuning. Additionally, the
study highlights the importance of higher image resolutions and mixing
multimodal-language data to improve LMM performance, and visual instruction
tuning can sometimes improve LMM's pure language capability. We hope that this
study makes state-of-the-art LMM research at a larger scale more accessible,
thus helping establish stronger baselines for future research. Code and
checkpoints will be made public. | http://arxiv.org/pdf/2309.09958 | Yadong Lu, Chunyuan Li, Haotian Liu, Jianwei Yang, Jianfeng Gao, Yelong Shen | cs.CV, cs.CL | Released at LLaVA Model Zoo:
https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md | null | cs.CV | 20230918 | 20230918 | [
{
"id": "2307.06281"
},
{
"id": "2305.03726"
},
{
"id": "2306.14895"
},
{
"id": "2009.03300"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "2306.04751"
},
{
"id": "2305.14314"
},
{
"id": "2304.15010"
},
{
"id": "2307.09288"
},
{
"id": "2308.02490"
},
{
"id": "2308.01390"
}
] |
2309.09971 | 29 | As pointed out by (Korsah et al., 2013), this problem cannot be solved in polynomial time. In this work, we tackle this problem by using large-language models.
Our prompt design choices try to help the LLM system solve Equation 2. In practice, we reformulate Equation 2 with qualities or rewards expressed in natural language as environment feedback. For example, when an agent successfully collects an item, the environment emits the signal "collect finish." When the dispatcher assigns a different task to the same agent, the environment emits the signal "agent ids cannot be the same." As rewards are not immediately observable, we borrow the spirit of temporal difference learning and accumulate state-action history into memory. Due to context-length limits, it is infeasible to fit the entire history into the context window, so we select a fixed-horizon history as part of the prompt to guide the model's performance. We further express the constraints of the system in natural language and repeat important constraints multiple times if necessary.
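The fixed-horizon history described above can be sketched as a bounded buffer of state-action-feedback records; the class name and record format here are our own illustration, not the MindAgent codebase:

```python
from collections import deque

class PromptHistory:
    """Fixed-horizon state-action memory for prompt construction.

    Only the most recent `horizon` records are kept, so the rendered
    history always fits in a bounded share of the context window.
    """

    def __init__(self, horizon: int = 5):
        self.records = deque(maxlen=horizon)

    def add(self, state: str, action: str, feedback: str) -> None:
        # Environment feedback strings like "collect finish" stand in for rewards.
        self.records.append(f"state: {state} | action: {action} | feedback: {feedback}")

    def render(self) -> str:
        return "\n".join(self.records)

history = PromptHistory(horizon=3)
for step, feedback in enumerate(
    ["collect finish", "cook finish", "agent ids cannot be the same", "serve finish"]
):
    history.add(state=f"s{step}", action=f"a{step}", feedback=feedback)
# The oldest record ("collect finish") has fallen out of the 3-step window.
print(history.render())
```

The `deque(maxlen=...)` makes eviction automatic, so the prompt builder never has to trim the history explicitly.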
# 5 EXPERIMENTS AND RESULTS
Overview. We conduct extensive experiments in CUISINEWORLD. We first introduce the exper- iment settings and present an analysis of empirical results in CUISINEWORLD. Our experiments focus on addressing the following research questions:
# Q1: How efficiently can the model dispatch multiple agents? | 2309.09971#29 | MindAgent: Emergent Gaming Interaction | Large Language Models (LLMs) have the capacity of performing complex
scheduling in a multi-agent system and can coordinate these agents into
completing sophisticated tasks that require extensive collaboration. However,
despite the introduction of numerous gaming frameworks, the community has
insufficient benchmarks towards building general multi-agents collaboration
infrastructure that encompass both LLM and human-NPCs collaborations. In this
work, we propose a novel infrastructure - MindAgent - to evaluate planning and
coordination emergent capabilities for gaming interaction. In particular, our
infrastructure leverages existing gaming framework, to i) require understanding
of the coordinator for a multi-agent system, ii) collaborate with human players
via un-finetuned proper instructions, and iii) establish an in-context learning
on few-shot prompt with feedback. Furthermore, we introduce CUISINEWORLD, a new
gaming scenario and related benchmark that dispatch a multi-agent collaboration
efficiency and supervise multiple agents playing the game simultaneously. We
conduct comprehensive evaluations with new auto-metric CoS for calculating the
collaboration efficiency. Finally, our infrastructure can be deployed into
real-world gaming scenarios in a customized VR version of CUISINEWORLD and
adapted in existing broader Minecraft gaming domain. We hope our findings on
LLMs and the new infrastructure for general-purpose scheduling and coordination
can help shed light on how such skills can be obtained by learning from large
language corpora. | http://arxiv.org/pdf/2309.09971 | Ran Gong, Qiuyuan Huang, Xiaojian Ma, Hoi Vo, Zane Durante, Yusuke Noda, Zilong Zheng, Song-Chun Zhu, Demetri Terzopoulos, Li Fei-Fei, Jianfeng Gao | cs.AI, cs.HC, cs.MA | The first three authors contributed equally. 28 pages | null | cs.AI | 20230918 | 20230919 | [
{
"id": "2307.04721"
},
{
"id": "2210.16257"
},
{
"id": "2307.02485"
},
{
"id": "2304.03347"
},
{
"id": "2010.03768"
},
{
"id": "2306.06070"
},
{
"id": "2308.11339"
},
{
"id": "2308.03688"
},
{
"id": "2212.14882"
},
{
"id": "2302.06100"
},
{
"id": "2302.01560"
},
{
"id": "1903.03094"
},
{
"id": "2305.16291"
},
{
"id": "2010.09890"
},
{
"id": "2303.05398"
},
{
"id": "1910.03655"
},
{
"id": "2209.07753"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2305.00970"
}
] |
2309.09958 | 30 | the performance of the resultant LMM. We also show for the ï¬rst time that visual instruction tuning can improve LMMâs language capability. Note that the training datasets used in this study is small. So, our ï¬ndings are still preliminary. In future work, we will experiment using much larger datasets to investigate in detail whether and how different methods of training data selection and mixing can improve the quality of much larger LMM. | 2309.09958#30 | An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models | Visual instruction tuning has recently shown encouraging progress with
open-source large multimodal models (LMM) such as LLaVA and MiniGPT-4. However,
most existing studies of open-source LMM are performed using models with 13B
parameters or smaller. In this paper we present an empirical study of scaling
LLaVA up to 33B and 65B/70B, and share our findings from our explorations in
image resolution, data mixing and parameter-efficient training methods such as
LoRA/QLoRA. These are evaluated by their impact on the multi-modal and language
capabilities when completing real-world tasks in the wild.
We find that scaling LMM consistently enhances model performance and improves
language capabilities, and performance of LoRA/QLoRA tuning of LMM are
comparable to the performance of full-model fine-tuning. Additionally, the
study highlights the importance of higher image resolutions and mixing
multimodal-language data to improve LMM performance, and visual instruction
tuning can sometimes improve LMM's pure language capability. We hope that this
study makes state-of-the-art LMM research at a larger scale more accessible,
thus helping establish stronger baselines for future research. Code and
checkpoints will be made public. | http://arxiv.org/pdf/2309.09958 | Yadong Lu, Chunyuan Li, Haotian Liu, Jianwei Yang, Jianfeng Gao, Yelong Shen | cs.CV, cs.CL | Released at LLaVA Model Zoo:
https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md | null | cs.CV | 20230918 | 20230918 | [
{
"id": "2307.06281"
},
{
"id": "2305.03726"
},
{
"id": "2306.14895"
},
{
"id": "2009.03300"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "2306.04751"
},
{
"id": "2305.14314"
},
{
"id": "2304.15010"
},
{
"id": "2307.09288"
},
{
"id": "2308.02490"
},
{
"id": "2308.01390"
}
] |
2309.09971 | 30 | # Q1: How efficiently can the model dispatch multiple agents?
Q2: Can the model dispatch agents for dynamic, on-the-fly goals across different tasks?
Q3: How do various components of the input prompt influence the modelâs performance?
Q4: How do other LLMs perform compared to GPT-4?
Q5: To what extent can the existing methods collaborate with human users?
Q6: Whatâs the human perception of collaborating with numerous intelligent agents?
5.1 LLM SETTINGS
We perform experiments on CUISINEWORLD through the OpenAI and Anthropic APIs. All GPT-4 experiments use the gpt-4-0613 model, and all ChatGPT experiments use gpt-3.5-turbo-0613. For Llama 2 experiments, we use Hugging Face inference endpoints with Llama-2-70b-chat-hf. We set the temperature for all experiments to 0.1, following (Wang et al., 2023a). We report the average results over three episodes.
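The evaluation protocol above (fixed temperature, results averaged over three episodes) can be sketched as the loop below; `StubEnv`, `run_episode`, and the step budget are illustrative placeholders, and in the real system the dispatcher would be an LLM API call at temperature 0.1:

```python
import statistics

class StubEnv:
    """Toy stand-in for CuisineWorld: every step completes one dish."""

    def reset(self):
        return "initial state"

    def step(self, actions):
        # Returns (next rendered state, number of dishes completed this step).
        return "next state", 1

def run_episode(dispatcher, env, max_steps=10):
    """One evaluation episode: the dispatcher picks actions from the state."""
    completed = 0
    state = env.reset()
    for _ in range(max_steps):
        actions = dispatcher(state)  # in practice: an LLM call, temperature=0.1
        state, newly_done = env.step(actions)
        completed += newly_done
    return completed

def average_over_episodes(dispatcher, env, n_episodes=3):
    """Average the episode score over several runs, as in the reported results."""
    return statistics.mean(run_episode(dispatcher, env) for _ in range(n_episodes))

print(average_over_episodes(lambda state: [], StubEnv()))
```

Averaging over episodes smooths out the stochasticity that remains even at a low sampling temperature.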
5.2 EXPERIMENT SETTING I: LLMS DISPATCH MULTI-AGENTS (NPC) | 2309.09971#30 | MindAgent: Emergent Gaming Interaction | Large Language Models (LLMs) have the capacity of performing complex
scheduling in a multi-agent system and can coordinate these agents into
completing sophisticated tasks that require extensive collaboration. However,
despite the introduction of numerous gaming frameworks, the community has
insufficient benchmarks towards building general multi-agents collaboration
infrastructure that encompass both LLM and human-NPCs collaborations. In this
work, we propose a novel infrastructure - MindAgent - to evaluate planning and
coordination emergent capabilities for gaming interaction. In particular, our
infrastructure leverages existing gaming framework, to i) require understanding
of the coordinator for a multi-agent system, ii) collaborate with human players
via un-finetuned proper instructions, and iii) establish an in-context learning
on few-shot prompt with feedback. Furthermore, we introduce CUISINEWORLD, a new
gaming scenario and related benchmark that dispatch a multi-agent collaboration
efficiency and supervise multiple agents playing the game simultaneously. We
conduct comprehensive evaluations with new auto-metric CoS for calculating the
collaboration efficiency. Finally, our infrastructure can be deployed into
real-world gaming scenarios in a customized VR version of CUISINEWORLD and
adapted in existing broader Minecraft gaming domain. We hope our findings on
LLMs and the new infrastructure for general-purpose scheduling and coordination
can help shed light on how such skills can be obtained by learning from large
language corpora. | http://arxiv.org/pdf/2309.09971 | Ran Gong, Qiuyuan Huang, Xiaojian Ma, Hoi Vo, Zane Durante, Yusuke Noda, Zilong Zheng, Song-Chun Zhu, Demetri Terzopoulos, Li Fei-Fei, Jianfeng Gao | cs.AI, cs.HC, cs.MA | The first three authors contributed equally. 28 pages | null | cs.AI | 20230918 | 20230919 | [
{
"id": "2307.04721"
},
{
"id": "2210.16257"
},
{
"id": "2307.02485"
},
{
"id": "2304.03347"
},
{
"id": "2010.03768"
},
{
"id": "2306.06070"
},
{
"id": "2308.11339"
},
{
"id": "2308.03688"
},
{
"id": "2212.14882"
},
{
"id": "2302.06100"
},
{
"id": "2302.01560"
},
{
"id": "1903.03094"
},
{
"id": "2305.16291"
},
{
"id": "2010.09890"
},
{
"id": "2303.05398"
},
{
"id": "1910.03655"
},
{
"id": "2209.07753"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2305.00970"
}
] |
2309.09958 | 31 | # References
[1] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35:23716-23736, 2022. 3
[2] Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, et al. OpenFlamingo: An open-source framework for training large autoregressive vision-language models. arXiv preprint arXiv:2308.01390, 2023. 3
open-source large multimodal models (LMM) such as LLaVA and MiniGPT-4. However,
most existing studies of open-source LMM are performed using models with 13B
parameters or smaller. In this paper we present an empirical study of scaling
LLaVA up to 33B and 65B/70B, and share our findings from our explorations in
image resolution, data mixing and parameter-efficient training methods such as
LoRA/QLoRA. These are evaluated by their impact on the multi-modal and language
capabilities when completing real-world tasks in the wild.
We find that scaling LMM consistently enhances model performance and improves
language capabilities, and performance of LoRA/QLoRA tuning of LMM are
comparable to the performance of full-model fine-tuning. Additionally, the
study highlights the importance of higher image resolutions and mixing
multimodal-language data to improve LMM performance, and visual instruction
tuning can sometimes improve LMM's pure language capability. We hope that this
study makes state-of-the-art LMM research at a larger scale more accessible,
thus helping establish stronger baselines for future research. Code and
checkpoints will be made public. | http://arxiv.org/pdf/2309.09958 | Yadong Lu, Chunyuan Li, Haotian Liu, Jianwei Yang, Jianfeng Gao, Yelong Shen | cs.CV, cs.CL | Released at LLaVA Model Zoo:
https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md | null | cs.CV | 20230918 | 20230918 | [
{
"id": "2307.06281"
},
{
"id": "2305.03726"
},
{
"id": "2306.14895"
},
{
"id": "2009.03300"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "2306.04751"
},
{
"id": "2305.14314"
},
{
"id": "2304.15010"
},
{
"id": "2307.09288"
},
{
"id": "2308.02490"
},
{
"id": "2308.01390"
}
] |
2309.09971 | 31 | 5.2 EXPERIMENT SETTING I: LLMS DISPATCH MULTI-AGENTS (NPC)
Collaboration Efficiency (Q1, Q2). Figure 4 and Tables 3, 4, and 5 report the system performance under different settings. In particular, Table 3 reports the multi-agent collaboration results for two agents, Table 4 for three agents, and Table 5 for four agents. Figure 4 displays the collaboration efficiency curves.
As shown in Figure 4, across different task levels, more agents generally lead to better collaboration efficiency: the collaboration efficiency curve is generally higher with more agents.
Computing CoS by levels also reveals that more agents lead to better collaboration efficiency. As shown in the tables, the CoS score is the highest when there are two agents in two cases. The
scheduling in a multi-agent system and can coordinate these agents into
completing sophisticated tasks that require extensive collaboration. However,
despite the introduction of numerous gaming frameworks, the community has
insufficient benchmarks towards building general multi-agents collaboration
infrastructure that encompass both LLM and human-NPCs collaborations. In this
work, we propose a novel infrastructure - MindAgent - to evaluate planning and
coordination emergent capabilities for gaming interaction. In particular, our
infrastructure leverages existing gaming framework, to i) require understanding
of the coordinator for a multi-agent system, ii) collaborate with human players
via un-finetuned proper instructions, and iii) establish an in-context learning
on few-shot prompt with feedback. Furthermore, we introduce CUISINEWORLD, a new
gaming scenario and related benchmark that dispatch a multi-agent collaboration
efficiency and supervise multiple agents playing the game simultaneously. We
conduct comprehensive evaluations with new auto-metric CoS for calculating the
collaboration efficiency. Finally, our infrastructure can be deployed into
real-world gaming scenarios in a customized VR version of CUISINEWORLD and
adapted in existing broader Minecraft gaming domain. We hope our findings on
LLMs and the new infrastructure for general-purpose scheduling and coordination
can help shed light on how such skills can be obtained by learning from large
language corpora. | http://arxiv.org/pdf/2309.09971 | Ran Gong, Qiuyuan Huang, Xiaojian Ma, Hoi Vo, Zane Durante, Yusuke Noda, Zilong Zheng, Song-Chun Zhu, Demetri Terzopoulos, Li Fei-Fei, Jianfeng Gao | cs.AI, cs.HC, cs.MA | The first three authors contributed equally. 28 pages | null | cs.AI | 20230918 | 20230919 | [
{
"id": "2307.04721"
},
{
"id": "2210.16257"
},
{
"id": "2307.02485"
},
{
"id": "2304.03347"
},
{
"id": "2010.03768"
},
{
"id": "2306.06070"
},
{
"id": "2308.11339"
},
{
"id": "2308.03688"
},
{
"id": "2212.14882"
},
{
"id": "2302.06100"
},
{
"id": "2302.01560"
},
{
"id": "1903.03094"
},
{
"id": "2305.16291"
},
{
"id": "2010.09890"
},
{
"id": "2303.05398"
},
{
"id": "1910.03655"
},
{
"id": "2209.07753"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2305.00970"
}
] |
2309.09958 | 32 | # 8https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard
[3] Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. InstructBLIP: Towards general-purpose vision-language models with instruction tuning, 2023. 3
[4] Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. QLoRA: Efficient fine-tuning of quantized LLMs. arXiv preprint arXiv:2305.14314, 2023. 2
[5] Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, et al. LLaMA-Adapter V2: Parameter-efficient visual instruction model. arXiv preprint arXiv:2304.15010, 2023. 3
open-source large multimodal models (LMM) such as LLaVA and MiniGPT-4. However,
most existing studies of open-source LMM are performed using models with 13B
parameters or smaller. In this paper we present an empirical study of scaling
LLaVA up to 33B and 65B/70B, and share our findings from our explorations in
image resolution, data mixing and parameter-efficient training methods such as
LoRA/QLoRA. These are evaluated by their impact on the multi-modal and language
capabilities when completing real-world tasks in the wild.
We find that scaling LMM consistently enhances model performance and improves
language capabilities, and performance of LoRA/QLoRA tuning of LMM are
comparable to the performance of full-model fine-tuning. Additionally, the
study highlights the importance of higher image resolutions and mixing
multimodal-language data to improve LMM performance, and visual instruction
tuning can sometimes improve LMM's pure language capability. We hope that this
study makes state-of-the-art LMM research at a larger scale more accessible,
thus helping establish stronger baselines for future research. Code and
checkpoints will be made public. | http://arxiv.org/pdf/2309.09958 | Yadong Lu, Chunyuan Li, Haotian Liu, Jianwei Yang, Jianfeng Gao, Yelong Shen | cs.CV, cs.CL | Released at LLaVA Model Zoo:
https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md | null | cs.CV | 20230918 | 20230918 | [
{
"id": "2307.06281"
},
{
"id": "2305.03726"
},
{
"id": "2306.14895"
},
{
"id": "2009.03300"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "2306.04751"
},
{
"id": "2305.14314"
},
{
"id": "2304.15010"
},
{
"id": "2307.09288"
},
{
"id": "2308.02490"
},
{
"id": "2308.01390"
}
] |
2309.09971 | 32 | 8
[Figure 4 panels: task success rate vs. task interval for task levels 0-12, comparing dispatching with different numbers of agents (2, 3, and 4).]
Figure 4: Collaboration Results on Different Tasks | 2309.09971#32 | MindAgent: Emergent Gaming Interaction | Large Language Models (LLMs) have the capacity of performing complex
scheduling in a multi-agent system and can coordinate these agents into
completing sophisticated tasks that require extensive collaboration. However,
despite the introduction of numerous gaming frameworks, the community has
insufficient benchmarks towards building general multi-agents collaboration
infrastructure that encompass both LLM and human-NPCs collaborations. In this
work, we propose a novel infrastructure - MindAgent - to evaluate planning and
coordination emergent capabilities for gaming interaction. In particular, our
infrastructure leverages existing gaming framework, to i) require understanding
of the coordinator for a multi-agent system, ii) collaborate with human players
via un-finetuned proper instructions, and iii) establish an in-context learning
on few-shot prompt with feedback. Furthermore, we introduce CUISINEWORLD, a new
gaming scenario and related benchmark that dispatch a multi-agent collaboration
efficiency and supervise multiple agents playing the game simultaneously. We
conduct comprehensive evaluations with new auto-metric CoS for calculating the
collaboration efficiency. Finally, our infrastructure can be deployed into
real-world gaming scenarios in a customized VR version of CUISINEWORLD and
adapted in existing broader Minecraft gaming domain. We hope our findings on
LLMs and the new infrastructure for general-purpose scheduling and coordination
can help shed light on how such skills can be obtained by learning from large
language corpora. | http://arxiv.org/pdf/2309.09971 | Ran Gong, Qiuyuan Huang, Xiaojian Ma, Hoi Vo, Zane Durante, Yusuke Noda, Zilong Zheng, Song-Chun Zhu, Demetri Terzopoulos, Li Fei-Fei, Jianfeng Gao | cs.AI, cs.HC, cs.MA | The first three authors contributed equally. 28 pages | null | cs.AI | 20230918 | 20230919 | [
{
"id": "2307.04721"
},
{
"id": "2210.16257"
},
{
"id": "2307.02485"
},
{
"id": "2304.03347"
},
{
"id": "2010.03768"
},
{
"id": "2306.06070"
},
{
"id": "2308.11339"
},
{
"id": "2308.03688"
},
{
"id": "2212.14882"
},
{
"id": "2302.06100"
},
{
"id": "2302.01560"
},
{
"id": "1903.03094"
},
{
"id": "2305.16291"
},
{
"id": "2010.09890"
},
{
"id": "2303.05398"
},
{
"id": "1910.03655"
},
{
"id": "2209.07753"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2305.00970"
}
] |
2309.09958 | 33 | [6] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020. 5
[7] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021. 2
[8] Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu. Otter: A multi-modal model with in-context instruction tuning. arXiv preprint arXiv:2305.03726, 2023. 3
[9] Chunyuan Li. Large multimodal models: Notes on CVPR 2023 tutorial. arXiv preprint arXiv:2306.14895, 2023. 1
[10] Chunyuan Li, Zhe Gan, Zhengyuan Yang, Jianwei Yang, Linjie Li, Lijuan Wang, and Jianfeng Gao. Multimodal foundation models: From specialists to general-purpose assistants. arXiv preprint, 2023. 1 | 2309.09958#33 | An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models | Visual instruction tuning has recently shown encouraging progress with
open-source large multimodal models (LMM) such as LLaVA and MiniGPT-4. However,
most existing studies of open-source LMM are performed using models with 13B
parameters or smaller. In this paper we present an empirical study of scaling
LLaVA up to 33B and 65B/70B, and share our findings from our explorations in
image resolution, data mixing and parameter-efficient training methods such as
LoRA/QLoRA. These are evaluated by their impact on the multi-modal and language
capabilities when completing real-world tasks in the wild.
We find that scaling LMM consistently enhances model performance and improves
language capabilities, and performance of LoRA/QLoRA tuning of LMM are
comparable to the performance of full-model fine-tuning. Additionally, the
study highlights the importance of higher image resolutions and mixing
multimodal-language data to improve LMM performance, and visual instruction
tuning can sometimes improve LMM's pure language capability. We hope that this
study makes state-of-the-art LMM research at a larger scale more accessible,
thus helping establish stronger baselines for future research. Code and
checkpoints will be made public. | http://arxiv.org/pdf/2309.09958 | Yadong Lu, Chunyuan Li, Haotian Liu, Jianwei Yang, Jianfeng Gao, Yelong Shen | cs.CV, cs.CL | Released at LLaVA Model Zoo:
https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md | null | cs.CV | 20230918 | 20230918 | [
{
"id": "2307.06281"
},
{
"id": "2305.03726"
},
{
"id": "2306.14895"
},
{
"id": "2009.03300"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "2306.04751"
},
{
"id": "2305.14314"
},
{
"id": "2304.15010"
},
{
"id": "2307.09288"
},
{
"id": "2308.02490"
},
{
"id": "2308.01390"
}
] |
2309.09971 | 33 | Figure 4: Collaboration Results on Different Tasks
The CoS score is highest when there are three agents in seven cases, and highest when there are four agents in three cases. The results also confirm that more agents lead to higher collaboration efficiency.
Findings. First, we observe that the system performance is generally better when there are more agents, indicating that the LLM dispatcher can coordinate more agents to execute tasks more efficiently. Second, we observe that the system performance degrades with more agents in less demanding conditions, indicating that the LLM dispatcher struggles when there are fewer tasks.
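The CoS values in the benchmark tables are consistent with averaging the completed/total task ratios across the five interval settings τ_int,(1)–(5); a minimal sketch of this computation (the function name is ours, not from the paper):

```python
from fractions import Fraction

def collaboration_score(ratios):
    """CoS: mean of completed/total task ratios across tau_int settings."""
    return sum(Fraction(done, total) for done, total in ratios) / len(ratios)

# Level 0, 2-agent entries from the paper's table: 18/54, 18/31, 18/25, 18/18, 18/18
cos = collaboration_score([(18, 54), (18, 31), (18, 25), (18, 18), (18, 18)])
print(round(float(cos), 3))  # 0.727, matching the reported CoS
```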
5.3 EXPERIMENT SETTING II: HUMAN AND MULTI-NPCS WITH LLMS
5.3.1 HUMAN DATA COLLECTION
Human Testing of Study Protocol. Before starting the experiment, a webpage introduction to the game is handed to the players. It contains rules and the basic controls of the game. Then we randomly assign the playing order. Participants can drop out of the testing at any time as they wish; in that case, their data will be discarded. The human evaluation interface is shown in Appendix D. | 2309.09971#33 | MindAgent: Emergent Gaming Interaction | Large Language Models (LLMs) have the capacity of performing complex
scheduling in a multi-agent system and can coordinate these agents into
completing sophisticated tasks that require extensive collaboration. However,
despite the introduction of numerous gaming frameworks, the community has
insufficient benchmarks towards building general multi-agents collaboration
infrastructure that encompass both LLM and human-NPCs collaborations. In this
work, we propose a novel infrastructure - MindAgent - to evaluate planning and
coordination emergent capabilities for gaming interaction. In particular, our
infrastructure leverages existing gaming framework, to i) require understanding
of the coordinator for a multi-agent system, ii) collaborate with human players
via un-finetuned proper instructions, and iii) establish an in-context learning
on few-shot prompt with feedback. Furthermore, we introduce CUISINEWORLD, a new
gaming scenario and related benchmark that dispatch a multi-agent collaboration
efficiency and supervise multiple agents playing the game simultaneously. We
conduct comprehensive evaluations with new auto-metric CoS for calculating the
collaboration efficiency. Finally, our infrastructure can be deployed into
real-world gaming scenarios in a customized VR version of CUISINEWORLD and
adapted in existing broader Minecraft gaming domain. We hope our findings on
LLMs and the new infrastructure for general-purpose scheduling and coordination
can help shed light on how such skills can be obtained by learning from large
language corpora. | http://arxiv.org/pdf/2309.09971 | Ran Gong, Qiuyuan Huang, Xiaojian Ma, Hoi Vo, Zane Durante, Yusuke Noda, Zilong Zheng, Song-Chun Zhu, Demetri Terzopoulos, Li Fei-Fei, Jianfeng Gao | cs.AI, cs.HC, cs.MA | The first three authors contributed equally. 28 pages | null | cs.AI | 20230918 | 20230919 | [
{
"id": "2307.04721"
},
{
"id": "2210.16257"
},
{
"id": "2307.02485"
},
{
"id": "2304.03347"
},
{
"id": "2010.03768"
},
{
"id": "2306.06070"
},
{
"id": "2308.11339"
},
{
"id": "2308.03688"
},
{
"id": "2212.14882"
},
{
"id": "2302.06100"
},
{
"id": "2302.01560"
},
{
"id": "1903.03094"
},
{
"id": "2305.16291"
},
{
"id": "2010.09890"
},
{
"id": "2303.05398"
},
{
"id": "1910.03655"
},
{
"id": "2209.07753"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2305.00970"
}
] |
2309.09958 | 34 | [11] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597, 2023. 3
[12] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning, 2023. 1, 2, 3
[13] Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, et al. Mmbench: Is your multi-modal model an all-around player? arXiv preprint arXiv:2307.06281, 2023. 4
[14] OpenAI. Gpt-4 technical report, 2023. 1
2309.09971 | 34 | Measurement. In the background, we collect the number of failed and successful tasks during the participant's interaction with the game system. In addition, we record the entire action history of players and intelligent agents. Therefore, we can replay action histories for further analysis. After each episode, the participants must complete a survey about their engagement with the system on a 5-point Likert scale.
Our objective measure is intended to evaluate the human-AI teaming performance, and the subjective measure is designed to evaluate users' perceptions of the system.
5.3.2 EXPERIMENT II SETTING
We conducted a user study in our gaming environment to answer Q5 and Q6.
2309.09958 | 35 | [14] OpenAI. Gpt-4 technical report, 2023. 1
[15] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023. 2, 6
[16] Vicuna. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. https://vicuna.lmsys.org/, 2023. 2, 5
[17] Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A Smith, Iz Beltagy, et al. How far can camels go? exploring the state of instruction tuning on open resources. arXiv preprint arXiv:2306.04751, 2023. 6
2309.09971 | 35 | 2-agent performance (GPT-4). Entries are completed/total tasks at interval settings τ_int,(1)–τ_int,(5), followed by the CoS.
Difficulty    Level  τ_int,(1)  τ_int,(2)  τ_int,(3)  τ_int,(4)  τ_int,(5)  CoS
very simple   0      18/54      18/31      18/25      18/18      18/18      0.727
very simple   1      18/56      17/34      19/25      18/19      17/17      0.706
very simple   7      12/31      10/23      10/17      12/12      12/12      0.682
simple        2      14/34      13/26      16/18      11/14      11/13      0.687
simple        4      12/30      12/22      11/18      11/12      11/13      0.664
simple        8      3/30       9/22       6/16       7/11       9/9        0.504
intermediate  3      10/26      10/17      11/13      12/12      11/11      0.764
intermediate  9      7/20       8/11       6/8        8/8        4/5        0.725
intermediate  10     7/23       6/12       7/10       9/9        7/7        0.701
advanced      5      6/23       5/13       8/10       6/7        8/8        0.661
advanced      11     6/21       4/14       9/9        8/9        8/8        0.692
advanced      12     10/36      8/21       8/17       11/12      9/12       0.559
2309.09958 | 36 | [18] Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Ehsan Azarnasab, Faisal Ahmed, Zicheng Liu, Ce Liu, Michael Zeng, and Lijuan Wang. Mm-react: Prompting chatgpt for multimodal reasoning and action, 2023. 3
[19] Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, and Lijuan Wang. Mm-vet: Evaluating large multimodal models for integrated capabilities. arXiv preprint arXiv:2308.02490, 2023. 1, 3, 4
[20] Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023. 1, 3
2309.09971 | 38 | 3-agent performance (GPT-4). Entries are completed/total tasks at interval settings τ_int,(1)–τ_int,(5), followed by the CoS.
Difficulty    Level  τ_int,(1)  τ_int,(2)  τ_int,(3)  τ_int,(4)  τ_int,(5)  CoS
very simple   0      21/55      20/31      22/25      22/22      20/20      0.781
very simple   1      24/55      25/33      21/26      20/21      15/16      0.778
very simple   7      16/33      11/22      17/17      14/14      11/12      0.780
simple        2      17/33      4/24       11/20      9/13       10/14      0.528
simple        4      9/28       13/24      9/17       7/10       10/11      0.600
simple        8      6/32       7/21       4/15       6/10       8/9        0.455
intermediate  3      12/25      14/20      13/14      10/10      12/12      0.822
intermediate  9      5/20       9/12       8/8        6/7        6/6        0.771
intermediate  10     8/21       9/13       12/12      10/10      8/8        0.815
advanced      5      7/22       7/14       7/7        5/8        5/5        0.689
advanced      11     7/22       8/14       9/10       7/8        8/8        0.733
advanced      12     9/26       10/23      10/16      11/13      6/10       0.570
Average
2309.09971 | 41 | 4-agent performance (GPT-4). Entries are completed/total tasks at interval settings τ_int,(1)–τ_int,(5), followed by the CoS.
Difficulty    Level  τ_int,(1)  τ_int,(2)  τ_int,(3)  τ_int,(4)  τ_int,(5)  CoS
very simple   0      22/54      24/32      23/25      22/22      14/18      0.771
very simple   1      18/55      21/33      23/26      21/22      20/20      0.761
very simple   7      17/34      14/24      13/18      14/14      14/14      0.761
simple        2      13/34      14/25      11/19      7/15       7/13       0.505
simple        4      8/28       12/24      10/17      10/13      9/11       0.592
simple        8      9/33       11/22      11/17      10/12      7/8        0.626
intermediate  3      16/27      16/19      15/17      12/13      12/12      0.848
intermediate  9      5/20       7/12       8/9        9/9        5/5        0.744
intermediate  10     8/23       9/15       11/11      10/10      7/7        0.790
advanced      5      5/22       7/14       7/8        6/7        6/6        0.692
advanced      11     8/22       6/12       10/11      8/8        3/5        0.675
advanced      12     8/35       12/23      9/17       9/13       7/10       …
2309.09971 | 43 | Table 5: 4-agent performance on different tasks
The user study evaluates the LLM dispatcher's capabilities of collaborating with humans, where participants collaborate with one, two, or three agents or work alone on the virtual cooking tasks. We consider the most general setting, where the LLM works on the unseen task, level 3.
5.3.3 EXPERIMENT II DESIGN
Hypotheses. The user study tests the following hypotheses:
• H1: Task productivity. Participants have higher productivity if collaborating with AI agents.
• H2: Task productivity with more agents. Participants have higher productivity if collaborating with more AI agents.
• H3: Perception of the robot. Participants would have higher perceived task efficiency and have more fun playing the game due to collaboration.
Manipulated Variables. We use a within-subject design for our experiment. In particular, every user tries to finish the task alone or collaborates with different numbers of robots with varying degrees of competency. We randomize the order of the treatments to mitigate practice effects, fatigue effects, and carryover effects.
• Single agent: Participants work on the task by themselves.
• LLM-powered multi-agent system: Participants collaborate with the multi-agent system powered by the LLM.
• Random agent: Random agents execute random actions from a pool of valid actions. Participants collaborate with random agents.
2309.09971 | 44 | • Random agent: Random agents execute random actions from a pool of valid actions. Participants collaborate with random agents.
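The within-subject order randomization described under Manipulated Variables can be sketched as follows; the condition labels and seeding scheme are illustrative assumptions, not from the paper:

```python
import random

# Hypothetical condition labels for the within-subject design.
CONDITIONS = ["alone", "1 LLM agent", "2 LLM agents", "3 LLM agents", "random agents"]

def treatment_order(participant_id: int, seed: int = 0) -> list:
    """Deterministically shuffle the treatment order per participant to
    mitigate practice, fatigue, and carryover effects."""
    rng = random.Random(seed * 10_000 + participant_id)
    order = CONDITIONS.copy()
    rng.shuffle(order)
    return order

print(treatment_order(0))  # one random permutation of the five conditions
```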
Main Results. We recruited 12 subjects for our study; among them were two females and ten males.
We use ANOVA to test the effects of different experimental conditions on collaboration performance and subjective perception of the AI agents. Tukey HSD tests are conducted on all possible pairs of experimental conditions.
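The one-way ANOVA used for these comparisons reduces to an F statistic over the condition groups; a pure-Python sketch on made-up ratings (the data below are illustrative, not the study's):

```python
def one_way_anova_f(groups):
    """F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)                          # number of conditions
    n = sum(len(g) for g in groups)          # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical 5-point Likert ratings under three conditions:
f_stat = one_way_anova_f([[2, 3, 2, 3], [4, 4, 5, 4], [3, 4, 3, 4]])
print(round(f_stat, 2))  # 10.09
```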
[Figure 5, panels (a)–(c): bar charts of the overall success rate, perceived enjoyment, and perceived more fun, compared across the conditions Human alone, Human + 1 agent, Human + 2 agents, and Human + 3 agents.]
(a) Collaboration score. We can tell that the collaboration score is higher if more agents are collaborating with human players, even though the difference is not significant.
(b) Perceived Enjoyment. Humans enjoy the game more if they collaborate with the right number of agents.
(c) Perceived more fun due to collaboration. Players enjoy the game more because of collaborating with competent agents.
[Figure 5, panels (d)–(e): bar charts of perceived assisting and perceived dependability across the Human alone and Human + 1/2/3 agent conditions.]
2309.09971 | 45 | [Figure 5, panels (e)–(f): bar charts of perceived dependability and perceived predictability across the Human alone and Human + 1/2/3 agent conditions.]
(d) Perceived Assisting. There is no significant difference in terms of human perceptions of helpfulness when collaborating with more agents, even though the task success rate is higher.
(e) Perceived dependability. When collaborating with more agents, players depend on the agents more.
(f) Perceived Predictability. There is no difference in terms of the predictability of agents' behaviors when collaborating with more agents.
[Figure 5 panels: Perceived Productivity and Perceived Trust plots]
(g) Perceived Productivity. Players think collaborating with AI agents will improve productivity.
(h) Perceived Trust. There is no difference in trust when collaborating with more agents.
Figure 5: Human Evaluations | 2309.09971#45 |
2309.09971 | 46 | Figure 5: Human Evaluations
Findings. We find significant effects on team collaboration success rate, F(4, 55) = 28.11, p < 0.001. Post-hoc comparisons using Tukey HSD tests revealed that the team of the player with LLM agents achieves a higher success rate than a human working alone, p < 0.001, across different numbers of agents, confirming H1. Even though the success rate is generally higher when collaborating with more agents, there is no significant effect compared with collaborating with one agent (collaborating with two agents, p = 0.774; collaborating with three agents, p = 0.231). We observe that human players have more fun playing the game when collaborating with LLM-powered intelligent agents than when playing alone, p = 0.0126. Players feel that collaboration with intelligent agents leads to higher productivity, p = 0.0104, thus confirming H3.
In addition, when playing with intelligent agents, human players take their actions based on other players' actions, p = 0.00266. Human players also found that intelligent agents are more predictable compared with random agents, p < 0.001.
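The reported F(4, 55) is a one-way ANOVA F statistic over five conditions with sixty participants (df = (5 − 1, 60 − 5) = (4, 55)); the exact grouping and the ratings below are our assumptions, used only to illustrate the computation behind such a report:

```python
# One-way ANOVA F statistic: between-groups mean square over within-groups
# mean square. Data are hypothetical ratings, 5 conditions x 12 participants,
# which yields the df pair (4, 55) reported in the text.
def one_way_anova(groups):
    k = len(groups)                                   # number of conditions
    n = sum(len(g) for g in groups)                   # total observations
    grand_mean = sum(x for g in groups for x in g) / n
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), (df_b, df_w)

conditions = [
    [3, 4, 3, 5, 4, 3, 4, 3, 4, 5, 3, 4],   # human alone (hypothetical)
    [6, 7, 6, 5, 7, 6, 6, 7, 5, 6, 7, 6],   # with 1 agent
    [6, 6, 7, 7, 6, 5, 7, 6, 6, 7, 6, 6],   # with 2 agents
    [7, 6, 7, 6, 7, 7, 6, 7, 6, 7, 6, 7],   # with 3 agents
    [5, 6, 5, 6, 6, 5, 6, 5, 6, 5, 6, 5],   # random-agent baseline
]
f_stat, df = one_way_anova(conditions)
```

A post-hoc Tukey HSD would then compare condition pairs, which is where pairwise p-values such as p = 0.774 come from.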
Further insights from player feedback highlighted an intriguing trade-off: while more agents improved overall task success rates, they reduced the game's enjoyment. Often, players felt sidelined and less involved. Thus, game developers should adjust AI performance to maintain player engagement
11 | 2309.09971#46 |
2309.09971 | 47 | 11
and fun. As indicated by Yuan et al. (2022), aligning human values with AIs might be a promising way to solve this problem.
5.4 VISUALIZING CUISINEWORLD
To implement CUISINEWORLD into a real game system, we built on top of Gao et al. (2020). In our game, as visually depicted in Figure 6, players are given the opportunity to engage in collaborative interactions with NPCs. In this game, human players' actions can be obtained from an inverse dynamics model by checking preconditions and post-effects. This introduces a unique dynamic to the gameplay, enabling users to experience a more immersive cooperative environment. Additionally, the game's interface is versatile, allowing players multiple ways to interact within the game world. They can either use a standard keyboard setup, which is more conventional and likely familiar to most PC gamers, or they can immerse themselves even further using a Virtual Reality (VR) device. This VR functionality ensures a more tactile and realistic interaction, as players can physically move, gesture, and engage with the NPCs and other in-game elements in a 3D environment.
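The precondition/post-effect check can be read as a small inverse dynamics model: the observed state transition is matched against each action's model. Action names, state fields, and the exact-match test below are all illustrative assumptions:

```python
# Minimal inverse-dynamics sketch: recover which discrete action a player took
# by checking each action's precondition against state s and its predicted
# post-effect against the observed next state s_next.
def infer_action(action_models, s, s_next):
    for name, (precond, effect) in action_models.items():
        if precond(s) and effect(s) == s_next:
            return name
    return None  # no action model explains the transition

# Toy kitchen actions (hypothetical, not the game's real action set).
actions = {
    "pickup_tomato": (lambda s: s["holding"] is None,
                      lambda s: {**s, "holding": "tomato"}),
    "putdown":       (lambda s: s["holding"] is not None,
                      lambda s: {**s, "holding": None}),
}
s0 = {"holding": None}
s1 = {"holding": "tomato"}
```

Calling `infer_action(actions, s0, s1)` recovers the pick-up action; a real implementation would tolerate partial state observations rather than requiring exact equality.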
[Figure 6 row labels: Multi-agent; Human-agent; VR Interaction] | 2309.09971#47 |
2309.09971 | 49 | Figure 6: The top two images show a multi-agent collaboration example in CuisineWorld: the three agents are preparing a mixed juice together. The middle two images show a human player as the head chef instructing the agents to cook mixed juice. The bottom two images show a human player collaborating with the collaborative agents in VR.
6 ANALYSIS AND EMERGENT GAMING ABILITIES
6.1 ABLATION STUDY FOR MULTI-AGENTS
Study on the Prompt Components Q3. In Table 7, we elucidate the performance of LLM dispatchers with certain components of the prompt omitted. Details about the prompt can be found in Appendix Figure 9 and Figure 8. Specifically, for these tests, we excluded individual components like inference knowledge, reduced the prompt example to a mere two steps instead of the complete demonstration, and evaluated the model without environment feedback. For context, our principal experiments, varying in the number of agents, incorporate a one-shot example for the corresponding number of agents. Our ablation studies further probe how varying the number of agents can influence model performance, with details in Table 8. | 2309.09971#49 |
2309.09971 | 50 | ing number of agents. Our ablation studies further probe how varying the number of agents can influence model performance, with details in Table 8.
Findings: From Table 7, a significant drop in performance is observed when environment feedback is excluded, underscoring its pivotal role in the efficacy of the LLM dispatcher. Replaying action sequences reveals that, without feedback, the LLM dispatcher tends to repeat mistakes and get stuck in specific states for prolonged durations. Another key takeaway is that a succinct two-step demonstration of the input and output format can still achieve commendable performance on unseen tasks with dynamic objectives. Notably, in these two-step instances, there's no explicit guide to finish any tasks. Yet, the model doesn't merely complete the task but continually performs additional tasks within the same episode. Furthermore, we also observe that integrating human-crafted inference knowledge bolsters the LLM dispatcher's performance. Lastly, even with few-shot demonstrations involving fewer agents, the LLM dispatcher retains satisfactory performance, as shown in Table 8.
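The three ablated components map naturally onto optional prompt sections. A hedged sketch of how such a dispatcher prompt might be assembled — the section names and layout are our assumptions, not the paper's actual prompt text:

```python
# Dispatcher prompt assembly with the three ablatable components studied above.
# Passing None for a component (or a small max_demo_steps) reproduces the
# corresponding ablation condition.
def build_prompt(base_instructions, demo_steps, *, inference_knowledge=None,
                 feedback=None, max_demo_steps=None):
    parts = [base_instructions]
    if inference_knowledge is not None:   # "w/o inference knowledge" ablation
        parts.append(inference_knowledge)
    steps = (demo_steps if max_demo_steps is None
             else demo_steps[:max_demo_steps])  # "few-step" ablation truncates
    parts.extend(steps)
    if feedback is not None:              # "w/o feedback" ablation
        parts.append(feedback)
    return "\n\n".join(parts)
```

For example, the "few-step" condition corresponds to `max_demo_steps=2` with no explicit task-completion guide in the truncated demonstration.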
Study on Other LLMs' Performance Q4. To study how other LLMs perform on our tasks, we tested the collaboration performance of GPT-3.5, Claude-2, and LLaMA in Table 6. For a fair comparison, all tests employed identical prompt inputs. | 2309.09971#50 |
2309.09971 | 51 | Findings: We observe that while other LLMs tend to underperform, models such as Claude-2 still manage to complete the task to a considerable extent.
6.2 EMERGING CAPABILITIES
Across our experiments, we observe the following emergent properties under our MINDAGENT framework.
Emergent Collaboration Tasks Understanding. As shown in Table 7, especially in the few-step ablation entries, GPT-4 exhibits its proficiency even when not provided with a full demonstration for specific tasks. To clarify, a 'full few-shot demo' typically refers to a comprehensive demonstration of a task, detailing each step and procedure involved. In contrast, we provide GPT-4 with only a partial demonstration, a glimpse of the task that executes just two steps.
Yet, despite this limited input, GPT-4's performance is remarkable. This underscores GPT-4's impressive emergent zero-shot multi-agent planning capabilities. Beyond simply completing unseen tasks, GPT-4 also demonstrates adaptability by dynamically prioritizing multiple different tasks as they arise, emphasizing its emergent multi-task, on-the-fly planning skills. | 2309.09971#51 |
2309.09971 | 53 |
          2 agents                           3 agents                           4 agents
          GPT-4  Claude-2  LLaMA  ChatGPT    GPT-4  Claude-2  LLaMA  ChatGPT    GPT-4  Claude-2  LLaMA  ChatGPT
τint,(1)  10/26  3/24      0      0/24       12/25  5/26      0      0/24       16/27  9/25      0      0/24
τint,(2)  10/17  3/16      0      0/15       14/20  4/16      0      0/15       16/19  4/15      0      0/15
τint,(3)  11/18  3/12      0      0/12       13/14  3/12      0      0/12       15/17  4/12      0      0/12
τint,(4)  11/13  3/9       0      0/9        10/10  5/11      0      0/9        12/13  6/11      0      0/9
τint,(5)  11/11  4/6       0      0/6        12/12  5/7       0      0/6        12/12  6/7       0      0/6
CoS       0.686  0.3125    0      0          0.822  0.372     0      0          0.848  0.473     0      0
Table 6: Performance of Other LLMs on Level 3 | 2309.09971#53 |
2309.09971 | 54 | Table 6: Performance of Other LLMs on Level 3
2 agent                        τint,(1)  τint,(2)  τint,(3)  τint,(4)  τint,(5)  CoS
GPT-4                          10/26     10/17     11/13     12/12     11/11     0.764
GPT-4 w/ few-step              8/26      11/19     11/13     9/11      10/10     0.710
GPT-4 w/o inference knowledge  8/25      9/17      10/12     8/9       9/9       0.714
GPT-4 w/o feedback             4/25      4/17      4/12      1/9       5/7       0.311

Table 7: Additional Ablation
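The CoS values in these tables are consistent with averaging the five per-interval completion ratios τint,(i); for the 2-agent GPT-4 row, the mean of 10/26, 10/17, 11/13, 12/12, and 11/11 is 0.764. A one-line check of this reading of the metric (ours, not the paper's code):

```python
# CoS as the mean of the five interval task-completion ratios tau_int,(i).
# The fractions below are the 2-agent GPT-4 row of Table 7.
def cos_score(ratios):
    return sum(done / total for done, total in ratios) / len(ratios)

gpt4_2agent = [(10, 26), (10, 17), (11, 13), (12, 12), (11, 11)]
print(round(cos_score(gpt4_2agent), 3))  # 0.764
```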
level 3                     GPT4 τint,(1)  GPT4 τint,(2)  GPT4 τint,(3)  GPT4 τint,(4)  GPT4 τint,(5)  CoS
4agent using 4agent module  16/27          16/19          15/17          12/13          12/12          0.848
4agent using 2agent module  14/27          16/20          15/16          13/13          12/12          0.851
3agent using 3agent module  12/25          14/20          13/14          10/10          12/12          0.822
—                           11/25          11/19          12/14          12/12          11/11          0.775
| 2309.09971#54 |
2309.09971 | 55 | Table 8: Using different numbers of agent demos
7 NOVEL GAME ADAPTATION
In line with our ongoing efforts to create collaborative, in-game, multi-agent systems, we ventured beyond CuisineWorld and made strides in integrating our infrastructure into the widely popular sandbox game, Minecraft. In this new adaptation, we designed several unique cooking tasks where two in-game agents, Alex and Steve, are assigned the responsibility of cooking various types of meat as shown in Figure 7. After cooking, agents need to deposit the items into a chest. More details can be found in Appendix C. The experiment results are presented in Table 9.
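A dispatcher emitting actions into Minecraft can be guarded by a minimal schema check. The action names below follow the list defined in this section, while the tuple encoding and arity table are our own sketch:

```python
# Hypothetical dispatcher-side validation of the Minecraft action schema;
# arities are inferred from the signatures listed in this section.
ACTION_ARITY = {
    "goto": 2,            # goto(agent, location)
    "killMob": 2,         # killMob(agent, mobType)
    "mineBlock": 2,       # mineBlock(agent, blockType)
    "putFuelFurnace": 2,  # putFuelFurnace(agent, fuelType)
    "putItemFurnace": 2,  # putItemFurnace(agent, itemType)
    "takeOutFurnace": 1,  # takeOutFurnace(agent)
    "putInChest": 2,      # putInChest(agent, itemType)
}

def is_valid_action(action):
    """Return True iff the action tuple names a known action with the right arity."""
    name, *args = action
    return ACTION_ARITY.get(name) == len(args)
```

Such a check lets the game reject malformed LLM outputs (unknown action names or wrong argument counts) before they reach the agents Alex and Steve.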
We define the following actions for the multi-agent system in our Minecraft game: 1) goto(agent, location); 2) killMob(agent, mobType); 3) mineBlock(agent, blockType); 4) putFuelFurnace(agent, fuelType), to put an item from the agent's inventory into the furnace's bottom (fuel) slot; 5) putItemFurnace(agent, itemType), to put an item from the agent's inventory into the furnace's top slot; 6) takeOutFurnace(agent), to take the cooked item out of the furnace; 7) putInChest(agent, itemType). | 2309.09971#55 |
2309.09971 | 56 | The state space in Minecraft contains the following: 1) nearby blocks for each agent; 2) nearby entities for each agent; 3) each agent's inventory; 4) items inside the furnace; 5) items inside the chest; 6) the human player's inventory, if a human player is involved.
To ensure reproducibility, we modify the game mechanism. A killed mob will respawn nearby, and a mined block will also respawn nearby.
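A snapshot of this state space might be represented as a plain dictionary; the field names and layout below are illustrative assumptions, not the paper's schema:

```python
# Illustrative snapshot of the Minecraft state space enumerated above,
# for the two agents Alex and Steve plus an optional human player.
state = {
    "nearby_blocks":   {"Alex": ["furnace"], "Steve": ["chest"]},
    "nearby_entities": {"Alex": ["cow"], "Steve": []},
    "inventory":       {"Alex": ["raw_beef"], "Steve": ["coal"]},
    "furnace":         {"bottom_fuel_slot": "coal", "top_item_slot": "raw_beef"},
    "chest":           ["cooked_beef"],
    "human_inventory": [],  # present only when a human player is involved
}
```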
The empirical data we collected from these game sessions provided us with compelling evidence that the multi-agent collaboration infrastructure we've developed has the robustness to be extrapolated and adapted across multiple distinct games, paving the way for broader applications in the gaming industry.
Going a step further, we bridged the gap between human players and in-game (NPC) agents by integrating Microsoft's Azure speech-to-text API into the Minecraft environment. This addition allows human players to communicate and collaborate with in-game NPC agents using voice chat. Human players can express their intents and desired goals to NPCs in real time through voice chat. This real-time vocal interaction enriches the gameplay experience, fostering a deeper level of immersion and synergy between human players and AI agents. Moreover, this integration opens the door for research into the efficacy of voice-assisted AI learning and how real-world human interactions can shape AI behavior in virtual domains.
When a human player chats with the multi-agent system, the prompt contains two additional components: the human's instructions and the human dialog history.
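As a rough sketch, the two extra components can be spliced into the dispatcher prompt as shown below. The section labels and helper name are illustrative assumptions; the exact prompt format used by MINDAGENT may differ:

```python
from __future__ import annotations
from typing import List, Optional

# Hypothetical assembly of the dispatcher prompt when a human player joins.
# Section labels and the helper name are illustrative, not the paper's format.

def build_prompt(game_rules: str, current_state: str,
                 human_instructions: Optional[str] = None,
                 human_dialog_history: Optional[List[str]] = None) -> str:
    sections = [
        "[GAME RULES]\n" + game_rules,
        "[CURRENT STATE]\n" + current_state,
    ]
    if human_instructions:  # extra component 1: the human's instructions
        sections.append("[HUMAN INSTRUCTIONS]\n" + human_instructions)
    if human_dialog_history:  # extra component 2: the human dialog history
        sections.append("[HUMAN DIALOG HISTORY]\n" + "\n".join(human_dialog_history))
    return "\n\n".join(sections)

prompt = build_prompt(
    "Cook meat in the furnace.",
    "Alex: idle; Steve: mining.",
    human_instructions="Please gather raw beef first.",
    human_dialog_history=["Human: hi team", "Alex: hello!"],
)
print("[HUMAN INSTRUCTIONS]" in prompt)  # prints True
```

Without a human player, the same helper simply omits the two extra sections.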
In addition, by integrating Minecraft VR mode with our infrastructure, we can take the player's interactive experience to the next level.
| | τint,(1) | τint,(2) | τint,(3) | τint,(4) | τint,(5) | CoS |
|---|---|---|---|---|---|---|
| GPT-4 Minecraft performance | 0.195 | 0.381 | 0.704 | 0.792 | 0.833 | 0.581 |
Table 9: Performance of our framework in Minecraft
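The reported CoS is consistent with taking the unweighted mean of the five per-interval collaboration efficiencies τint,(i). A minimal sketch of that computation, assuming CoS is defined as this mean (the function name is ours):

```python
# CoS computed as the unweighted mean of the per-interval collaboration
# efficiencies tau_int,(i) -- an assumption consistent with Table 9's numbers.
def collaboration_score(tau_int):
    return sum(tau_int) / len(tau_int)

minecraft_tau = [0.195, 0.381, 0.704, 0.792, 0.833]
print(round(collaboration_score(minecraft_tau), 3))  # prints 0.581
```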
[Figure 7 panel labels: Multi-agent, Human-agent, VR Interaction]
Figure 7: The top two images show a multi-agent collaboration example in Minecraft. In the left image, Alex and Steve are killing different animals, and in the right image, Alex and Steve are cooking meat in a furnace together. The middle two images show a human player instructing the agents to perform certain actions. The bottom two images show a human player collaborating with agents in VR.
# 8 CONCLUSION
In this paper, we presented MINDAGENT, an infrastructure for multi-agent collaboration through LLMs across multiple gaming domains. We investigated the multi-agent planning capabilities of MINDAGENT, and we deployed our infrastructure into real-world video games to demonstrate its effectiveness for multi-agent collaboration and human-AI collaboration. Beyond its practical applications, we hope that our endeavor serves as a beacon, guiding the development of future gaming systems where human-AI collaboration is seamless and intuitive. Furthermore, we are optimistic that our insights and findings might catalyze innovations in crafting games that are not only technologically advanced but also significantly more engaging and enjoyable for players.
# ACKNOWLEDGMENTS
We are especially grateful to Johannes Gehrke, Ryen White, Haiyan Zhang, and Kareem Choudhry for their enormous advice, support, and encouragement of the work. We appreciate Katja Hofmann, Andrzej Banburski-Fahey, Jianwei Yang, Michel Galley, Nebojsa Jojic, and Bill Dolan for the early insightful discussions, suggestions, and comments. The authors gratefully acknowledge Adrian Brown from the Xbox team for his discussion, feedback, and pointers to the modeling generation and literature. We thank Rohan Taori, Janardhan Kulkarni, Ziheng Zhou, Yu Wang, Eloi Moliner Juanpere, Xiaofeng Gao, Collin Huang, Xiaodong Yu, and Shuwen Qiu for their help on the human experiment setup.
# REFERENCES
Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan, and Andy Zeng. Do as I can and not as I say: Grounding language in robotic affordances. In arXiv preprint arXiv:2204.01691, 2022.
Bowen Baker, Ilge Akkaya, Peter Zhokov, Joost Huizinga, Jie Tang, Adrien Ecoffet, Brandon Houghton, Raul Sampedro, and Jeff Clune. Video PreTraining (VPT): Learning to act by watching unlabeled online videos. Advances in Neural Information Processing Systems, 35:24639–24654, 2022.
Anton Bakhtin, Noam Brown, Emily Dinan, Gabriele Farina, Colin Flaherty, Daniel Fried, Andrew Goff, Jonathan Gray, Hengyuan Hu, et al. Human-level play in the game of Diplomacy by combining language models with strategic reasoning. Science, 378(6624):1067–1074, 2022.
Andrew Blair-Stanek, Nils Holzenberger, and Benjamin Van Durme. Can GPT-3 perform statutory reasoning? arXiv preprint arXiv:2302.06100, 2023.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020.
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712, 2023.
Micah Carroll, Rohin Shah, Mark K Ho, Tom Griffiths, Sanjit Seshia, Pieter Abbeel, and Anca Dragan. On the utility of learning about humans for human-AI coordination. Advances in Neural Information Processing Systems, 32, 2019.
Jonathan H Choi, Kristin E Hickman, Amy Monahan, and Daniel Schwarcz. ChatGPT goes to law school. Available at SSRN, 2023.
Marc-Alexandre Côté, Ákos Kádár, Xingdi Yuan, Ben Kybartas, Tavian Barnes, Emery Fine, James Moore, Matthew Hausknecht, Layla El Asri, Mahmoud Adada, et al. TextWorld: A learning environment for text-based games. In Computer Games: 7th Workshop, CGW 2018, Held in Conjunction with the 27th International Conference on Artificial Intelligence, IJCAI 2018, Stockholm, Sweden, July 13, 2018, Revised Selected Papers 7, pp. 41–75. Springer, 2019.
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, and Yu Su. Mind2Web: Towards a generalist agent for the web. arXiv preprint arXiv:2306.06070, 2023.
Xiaofeng Gao, Ran Gong, Yizhou Zhao, Shu Wang, Tianmin Shu, and Song-Chun Zhu. Joint mind modeling for explanation generation in complex human-robot collaborative tasks. In 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), pp. 1119–1126. IEEE, 2020.
Xiaofeng Gao, Qiaozi Gao, Ran Gong, Kaixiang Lin, Govind Thattai, and Gaurav S Sukhatme. DialFRED: Dialogue-enabled agents for embodied instruction following. IEEE Robotics and Automation Letters, 7(4):10049–10056, 2022.
Qiuyuan Huang, Jae Sung Park, Abhinav Gupta, Paul Bennett, Ran Gong, Subhojit Som, Baolin Peng, Owais Khan Mohammed, Chris Pal, Yejin Choi, et al. ARK: Augmented reality with knowledge interactive emergent ability. arXiv preprint arXiv:2305.00970, 2023.
Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pp. 9118–9147. PMLR, 17–23 Jul 2022a. URL https://proceedings.mlr.press/v162/huang22a.html.
Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, Pierre Sermanet, Noah Brown, Tomas Jackson, Linda Luu, Sergey Levine, Karol Hausman, and Brian Ichter. Inner monologue: Embodied reasoning through planning with language models. In arXiv preprint arXiv:2207.05608, 2022b.
Shima Imani, Liang Du, and Harsh Shrivastava. MathPrompter: Mathematical reasoning using large language models. arXiv preprint arXiv:2303.05398, 2023.
Unnat Jain, Luca Weihs, Eric Kolve, Mohammad Rastegari, Svetlana Lazebnik, Ali Farhadi, Alexander G Schwing, and Aniruddha Kembhavi. Two body problem: Collaborative visual task completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6689–6699, 2019. 3
Katharina Jeblick, Balthasar Schachtner, Jakob Dexl, Andreas Mittermeier, Anna Theresa Stüber, Johanna Topalis, Tobias Weber, Philipp Wesp, Bastian Sabel, Jens Ricke, et al. Chatgpt makes medicine easy to swallow: An exploratory case study on simplified radiology reports. arXiv preprint arXiv:2212.14882, 2022. 2
G Ayorkor Korsah, Anthony Stentz, and M Bernardine Dias. A comprehensive taxonomy for multi-robot task allocation. The International Journal of Robotics Research, 32(12):1495–1512, 2013. 8

Jacky Liang, Wenlong Huang, Fei Xia, Peng Xu, Karol Hausman, Brian Ichter, Pete Florence, and Andy Zeng. Code as policies: Language model programs for embodied control. arXiv preprint arXiv:2209.07753, 2022. 2, 3
Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, et al. Agentbench: Evaluating llms as agents. arXiv preprint arXiv:2308.03688, 2023. 4
Xinzhu Liu, Xinghang Li, Di Guo, Sinan Tan, Huaping Liu, and Fuchun Sun. Embodied multi-agent task planning from ambiguous instruction. Proceedings of Robotics: Science and Systems, New York City, NY, USA, pp. 1–14, 2022. 4
Ryan Lowe, Yi I Wu, Aviv Tamar, Jean Harb, OpenAI Pieter Abbeel, and Igor Mordatch. Multi-agent actor-critic for mixed cooperative-competitive environments. Advances in Neural Information Processing Systems, 30, 2017. 3

Suvir Mirchandani, Fei Xia, Pete Florence, Brian Ichter, Danny Driess, Montserrat Gonzalez Arenas, Kanishka Rao, Dorsa Sadigh, and Andy Zeng. Large language models as general pattern machines. arXiv preprint arXiv:2307.04721, 2023. 2
John J Nay. Law informs code: A legal informatics approach to aligning artificial intelligence with humans. Nw. J. Tech. & Intell. Prop., 20:309, 2022. 2
Oded Nov, Nina Singh, and Devin M Mann. Putting chatgpt's medical advice to the (turing) test. medRxiv, pp. 2023–01, 2023. 2
Aishwarya Padmakumar, Jesse Thomason, Ayush Shrivastava, Patrick Lange, Anjali Narayan-Chen, Spandana Gella, Robinson Piramuthu, Gokhan Tur, and Dilek Hakkani-Tur. Teach: Task-driven embodied agents that chat. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pp. 2017–2025, 2022. 4

Joon Sung Park, Joseph C O'Brien, Carrie J Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442, 2023. 3, 4
Xavier Puig, Tianmin Shu, Shuang Li, Zilin Wang, Yuan-Hong Liao, Joshua B Tenenbaum, Sanja Fidler, and Antonio Torralba. Watch-and-help: A challenge for social perception and human-ai collaboration. arXiv preprint arXiv:2010.09890, 2020. 3, 4
Tabish Rashid, Mikayel Samvelyan, Christian Schroeder De Witt, Gregory Farquhar, Jakob Foerster, and Shimon Whiteson. Monotonic value function factorisation for deep multi-agent reinforcement learning. The Journal of Machine Learning Research, 21(1):7234–7284, 2020. 3
Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Côté, Yonatan Bisk, Adam Trischler, and Matthew Hausknecht. Alfworld: Aligning text and embodied environments for interactive learning. arXiv preprint arXiv:2010.03768, 2020. 4

Peter Stone and Manuela Veloso. Multiagent systems: A survey from a machine learning perspective. Autonomous Robots, 8:345–383, 2000. 2
Alane Suhr, Claudia Yan, Charlotte Schluger, Stanley Yu, Hadi Khader, Marwa Mouallem, Iris Zhang, and Yoav Artzi. Executing instructions in situated collaborative interactions. arXiv preprint arXiv:1910.03655, 2019. 4
Jack Urbanek, Angela Fan, Siddharth Karamcheti, Saachi Jain, Samuel Humeau, Emily Dinan, Tim Rocktäschel, Douwe Kiela, Arthur Szlam, and Jason Weston. Learning to speak and act in a fantasy text adventure game. arXiv preprint arXiv:1903.03094, 2019. 4
Yanming Wan, Jiayuan Mao, and Josh Tenenbaum. Handmethat: Human-robot communication in physical and social environments. Advances in Neural Information Processing Systems, 35:12014–12026, 2022. 3, 4

Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291, 2023a. 2, 3, 8
Zihao Wang, Shaofei Cai, Anji Liu, Xiaojian Ma, and Yitao Liang. Describe, explain, plan and select: Interactive planning with large language models enables open-world multi-task agents. arXiv preprint arXiv:2302.01560, 2023b. 2, 3
Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652, 2021. 2
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022. 2

Kailai Yang, Shaoxiong Ji, Tianlin Zhang, Qianqian Xie, and Sophia Ananiadou. On the evaluations of chatgpt and emotion-enhanced prompting for mental health analysis. arXiv preprint arXiv:2304.03347, 2023. 2
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models. In International Conference on Learning Representations (ICLR), 2023. 2, 3, 4
Luyao Yuan, Xiaofeng Gao, Zilong Zheng, Mark Edmonds, Ying Nian Wu, Federico Rossano, Hongjing Lu, Yixin Zhu, and Song-Chun Zhu. In situ bidirectional human-robot value alignment. Science robotics, 7(68):eabm4183, 2022. 12
Ceyao Zhang, Kaijie Yang, Siyi Hu, Zihao Wang, Guanghe Li, Yihang Sun, Cheng Zhang, Zhaowei Zhang, Anji Liu, Song-Chun Zhu, et al. Proagent: Building proactive cooperative ai with large language models. arXiv preprint arXiv:2308.11339, 2023a. 3

Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B Tenenbaum, Tianmin Shu, and Chuang Gan. Building cooperative embodied agents modularly with large language models. arXiv preprint arXiv:2307.02485, 2023b. 3
Xinyu Zhu, Junjie Wang, Lin Zhang, Yuxiang Zhang, Ruyi Gan, Jiaxing Zhang, and Yujiu Yang. Solving math word problem via cooperative reasoning induced language models. arXiv preprint arXiv:2210.16257, 2022. 2
# APPENDIX
# A PROMPT EXAMPLES
We provide some prompt examples for CuisineWorld. Figure 8 shows an example of the system prompt info. Figure 9 shows an example of a partial demonstration.

The available actions are:
1) goto: goto a tool location
2) get: get some object from a tool
3) put: put some object into a tool
4) activate: activate the tool to cook all ingredients inside the tool into a different tool
5) noop: not performing any actions

Sometimes the system will give you error messages. Please consider these error messages when executing actions. You need to specify actions for all of the agents, except human. They all have different agent numbers. Do not assign actions to the same agent more than once. When a tool reaches its capacity, you need to take stuff out; otherwise, you cannot put items inside. When you are holding objects, you cannot get any more objects. When you are holding objects, you cannot activate tools. After you cook a required dish, you need to put it into the servingtable. You can only pick up objects from a tool location if you are located at that tool location. When you activate any tools, make sure all the items inside the tool are respecting the recipes. Otherwise, you will cook waste. Avoid waste at all cost.
*** You should mix salad in the mixer. To make salad you should chop veggies first. ***
2309.09971 | 75 | first. *** =** If the tool is occupied, indicated by the occupy({) predicate, you cannot get objects from it or put objects into it. ++» *** The food orders are keep coming. You should finish as many dishes as possible and finish every dish as soon as possible. Please deliver the order to the serveringtable when it is finished. *** ex The dish will expire after the lifetime reaches @ and it's not at the serveringtable. Please avoid this. *«« Here are the recipes: , you will cook waste. Avoid waste at all cost. Cook porkMeatcake at: â- location: blender â with ingredients: pork, flour, Cook salmonSashimi at: ~~ location: chopboard -- with ingredients: salmon, Cook tunaSashimi at: -â- location: chopboard == with ingredients: tuna, Cook mixedSashimi at: â- location: mixer -- with ingredients: selmonSashini, tunaSashimi, The following objects are available: â-1) salmonSashini â-2) tuna --3) mixedSashimi ~-4) tunaSashini --5) porkMeatcake --6) salmon --7) flour ~-8) pork The objecsts are cooked | 2309.09971#75 | MindAgent: Emergent Gaming Interaction | Large Language Models (LLMs) have the capacity of performing complex
scheduling in a multi-agent system and can coordinate these agents into
completing sophisticated tasks that require extensive collaboration. However,
despite the introduction of numerous gaming frameworks, the community has
insufficient benchmarks towards building general multi-agents collaboration
infrastructure that encompass both LLM and human-NPCs collaborations. In this
work, we propose a novel infrastructure - MindAgent - to evaluate planning and
coordination emergent capabilities for gaming interaction. In particular, our
infrastructure leverages existing gaming framework, to i) require understanding
of the coordinator for a multi-agent system, ii) collaborate with human players
via un-finetuned proper instructions, and iii) establish an in-context learning
on few-shot prompt with feedback. Furthermore, we introduce CUISINEWORLD, a new
gaming scenario and related benchmark that dispatch a multi-agent collaboration
efficiency and supervise multiple agents playing the game simultaneously. We
conduct comprehensive evaluations with new auto-metric CoS for calculating the
collaboration efficiency. Finally, our infrastructure can be deployed into
real-world gaming scenarios in a customized VR version of CUISINEWORLD and
adapted in existing broader Minecraft gaming domain. We hope our findings on
LLMs and the new infrastructure for general-purpose scheduling and coordination
can help shed light on how such skills can be obtained by learning from large
language corpora. | http://arxiv.org/pdf/2309.09971 | Ran Gong, Qiuyuan Huang, Xiaojian Ma, Hoi Vo, Zane Durante, Yusuke Noda, Zilong Zheng, Song-Chun Zhu, Demetri Terzopoulos, Li Fei-Fei, Jianfeng Gao | cs.AI, cs.HC, cs.MA | The first three authors contributed equally. 28 pages | null | cs.AI | 20230918 | 20230919 |
--4) tunaSashimi --5) porkMeatcake --6) salmon --7) flour --8) pork The objects are cooked using tools or are just base ingredients. Among them, the following are base ingredients: --1) tuna --2) salmon --3) flour --4) pork You can only obtain base ingredients from the storage initially. Additional rules: You can place up to infinite items into the storage0. You can place up to infinite items into the servingtable0. You can place up to 1 item into the chopboard0. You can place up to 1 item into the chopboard1. You can place up to 5 items into the mixer0. You can place up to 5 items into the mixer1. *** Only *** the following tools are available: storage0, servingtable0, chopboard0, chopboard1, mixer0, mixer1. You cannot pick up these tools. You can only use those tools at the corresponding | 2309.09971#76 | MindAgent: Emergent Gaming Interaction |
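The capacity rules quoted above can be sketched as a simple check. The tool names and limits come from the prompt text; the function itself is illustrative, not the paper's implementation:

```python
# Illustrative sketch of the CuisineWorld capacity rules quoted above
# (tool names and limits from the prompt text; the API is an assumption).
CAPACITY = {
    "storage0": float("inf"),
    "servingtable0": float("inf"),
    "chopboard0": 1,
    "chopboard1": 1,
    "mixer0": 5,
    "mixer1": 5,
}

def can_place(tool: str, contents: list[str]) -> bool:
    """An agent may put an object into a tool only while it is below capacity."""
    return len(contents) < CAPACITY[tool]

print(can_place("chopboard0", []))        # True: an empty chopboard accepts one item
print(can_place("chopboard0", ["tuna"]))  # False: chopboards hold at most 1 item
```

The same check would also cover the occupy() rule: a tool at capacity is effectively occupied for placement.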
2309.09971 | 78 | Figure 8: The MINDAGENT system prompt example.
Goal: porkMeatcake. t=0 state: at(agent0, servingtable0), at(agent1, servingtable0), hold(agent0, None), hold(agent1, None), inside(storage0, None), inside(blender0, None); action: goto_agent0_storage0, goto_agent1_storage0. t=1 state: at(agent0, storage0), at(agent1, storage0), hold(agent0, None), hold(agent1, None), inside(storage0, None), inside(blender0, None), inside(chopboard0, None), inside(servingtable0, None); action: ... t=2 state: at(agent0, storage0), at(agent1, storage0), hold(agent0, flour), hold(agent1, pork), inside(storage0, None), inside(blender0, None), inside(chopboard0, None), inside(chopboard1, None), inside(servingtable0, None); action: goto_agent0_blender0, goto_agent1_blender0.
Figure 9: The MINDAGENT system partial one-shot demo example.
# B TASK DETAILS IN CUISINEWORLD | 2309.09971#78 | MindAgent: Emergent Gaming Interaction |
Here we visualize different task graphs in CUISINEWORLD. In CUISINEWORLD, we provide tasks of different complexities to holistically evaluate the multi-agent system's performance. In addition, the environment is highly customizable and extendable. Users only need to modify the JSON files to add more tasks or modify existing tasks.
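The note above about extending tasks via JSON can be illustrated with a sketch. The field names below are hypothetical, mirroring the recipe format quoted in the system prompt rather than the shipped schema:

```python
# Hypothetical JSON entry for a new dish, mirroring the recipe format in the
# system prompt (field names are illustrative, not the released task schema).
import json

new_task = {
    "name": "mixedSashimi",
    "location": "mixer",
    "ingredients": ["salmonSashimi", "tunaSashimi"],
    "lifetime": 30,
}
print(json.dumps(new_task, indent=2))
```

Adding such an entry to the task file would register a new dish without touching the game code.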
B.1 LEVEL 0
Salmon Meatcake
Figure 10: Salmon Meatcake
B.2 LEVEL 1
# (a) Salmon Meatcake
# (b) Lamb Meatcake
# (c) Lobster Meatcake
B.3 LEVEL 2
# (a) Salmon Sashimi
(b) Tuna Sashimi
(c) MixedSashimi
B.4 LEVEL 3 | 2309.09971#79 | MindAgent: Emergent Gaming Interaction |
(a) Salmon Sushi
(b) Tuna Sushi
B.5 LEVEL 4
(a) Tomato Salad (b) Lettuce Salad
# (c) Tomato Lettuce Salad
(d) Tomato Cucumber Salad
B.6 LEVEL 5
| 2309.09971#80 | MindAgent: Emergent Gaming Interaction |
# (a) Tomato Pasta
(b) Beef Pasta
# (c) Pork Pasta
B.7 LEVEL 6
(a) pepperoniPizza (b) hawaiianPizza (c) chickenPizza
B.8 LEVEL 7
| 2309.09971#81 | MindAgent: Emergent Gaming Interaction |
(a) onionPotatoCarrotSoup
(b) onionPotatoLeekSoup
(c) onionBroccoliCheeseSoup
# B.9 LEVEL 8
# (a) Beef Dumpling
# (b) Pork Dumpling
(c) Salmon Dumpling
B.10 LEVEL 9
(a) Cheese Burger (b) MaxJr (c) Hopper
B.11 LEVEL 10 | 2309.09971#82 | MindAgent: Emergent Gaming Interaction |
# (a) BurritodePastor
# (b) BurritodePollo
# (c) BurritodeAsada
B.12 LEVEL 11
(a) BurritodePastor (b) BurritodePollo (c) BurritodeAsada
| 2309.09971#83 | MindAgent: Emergent Gaming Interaction |
2309.09971 | 84 | Rice Chopboard ( Pot Salmon Sashimi Cooked Rice < Mixer Salmon Sushi
a | Pot ) < Chopboard ) Tuna Sashimi Cooked Rice ( Mixer > Tuna Sushi
(d) SalmonSushi
(e) TunaSushi
B.13 LEVEL 12
(a) Potato Salad (b) French Fries (c) Smashed Potato
# C MINECRAFT
Here we visualize the task graphs for different tasks in Minecraft.
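The captions below name four cooking tasks. A hypothetical sketch of how they could be encoded in the same task-graph format; item and tool names are guesses for illustration, not the released schema:

```python
# Hypothetical task-graph entries for the Minecraft cooking tasks captioned
# below; names are illustrative guesses, not taken from the released code.
MINECRAFT_TASKS = {
    "cooked_chicken": {"tool": "furnace", "ingredients": ["raw_chicken", "fuel"]},
    "cooked_mutton": {"tool": "furnace", "ingredients": ["raw_mutton", "fuel"]},
    "cooked_steak": {"tool": "furnace", "ingredients": ["raw_beef", "fuel"]},
    "cooked_porkchop": {"tool": "furnace", "ingredients": ["raw_porkchop", "fuel"]},
}

def required_items(dish: str) -> list[str]:
    """Ingredients an agent must gather before using the cooking tool."""
    return MINECRAFT_TASKS[dish]["ingredients"]

print(required_items("cooked_steak"))  # ['raw_beef', 'fuel']
```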
(a) Cooking chicken in Minecraft
(b) Cooking mutton in Minecraft
(c) Cooking steak in Minecraft
(d) Cooking porkchop in Minecraft
# D HUMAN EVALUATION INTERFACE | 2309.09971#84 | MindAgent: Emergent Gaming Interaction |
We use the human evaluation interface to test humans' perception of collaborative agents. This gives us a more controlled environment, so that users' perception of the collaborative agents does not depend on their ability to control the keyboard and mouse, nor on the latency and rate limits of GPT-4.
level_3, time step 1 of 60; current dish: 1. salmonSushi (remaining time: 25); dishes completed: none; previous actions: goto_agent0_storage0, goto_agent1_storage0, goto_agent2_storage0
(a) Welcome screen for human evaluation
(b) Human Evaluation Example | 2309.09971#85 | MindAgent: Emergent Gaming Interaction |
scheduling in a multi-agent system and can coordinate these agents into
completing sophisticated tasks that require extensive collaboration. However,
despite the introduction of numerous gaming frameworks, the community has
insufficient benchmarks towards building general multi-agents collaboration
infrastructure that encompass both LLM and human-NPCs collaborations. In this
work, we propose a novel infrastructure - MindAgent - to evaluate planning and
coordination emergent capabilities for gaming interaction. In particular, our
infrastructure leverages existing gaming framework, to i) require understanding
of the coordinator for a multi-agent system, ii) collaborate with human players
via un-finetuned proper instructions, and iii) establish an in-context learning
on few-shot prompt with feedback. Furthermore, we introduce CUISINEWORLD, a new
gaming scenario and related benchmark that dispatch a multi-agent collaboration
efficiency and supervise multiple agents playing the game simultaneously. We
conduct comprehensive evaluations with new auto-metric CoS for calculating the
collaboration efficiency. Finally, our infrastructure can be deployed into
real-world gaming scenarios in a customized VR version of CUISINEWORLD and
adapted in existing broader Minecraft gaming domain. We hope our findings on
LLMs and the new infrastructure for general-purpose scheduling and coordination
can help shed light on how such skills can be obtained by learning from large
language corpora. | http://arxiv.org/pdf/2309.09971 | Ran Gong, Qiuyuan Huang, Xiaojian Ma, Hoi Vo, Zane Durante, Yusuke Noda, Zilong Zheng, Song-Chun Zhu, Demetri Terzopoulos, Li Fei-Fei, Jianfeng Gao | cs.AI, cs.HC, cs.MA | The first three authors contributed equally. 28 pages | null | cs.AI | 20230918 | 20230919 | [
{
"id": "2307.04721"
},
{
"id": "2210.16257"
},
{
"id": "2307.02485"
},
{
"id": "2304.03347"
},
{
"id": "2010.03768"
},
{
"id": "2306.06070"
},
{
"id": "2308.11339"
},
{
"id": "2308.03688"
},
{
"id": "2212.14882"
},
{
"id": "2302.06100"
},
{
"id": "2302.01560"
},
{
"id": "1903.03094"
},
{
"id": "2305.16291"
},
{
"id": "2010.09890"
},
{
"id": "2303.05398"
},
{
"id": "1910.03655"
},
{
"id": "2209.07753"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2303.12712"
},
{
"id": "2109.01652"
},
{
"id": "2305.00970"
}
] |
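The dispatched actions recorded in the evaluation screenshots (e.g. goto_agent0_storage0, get_agent1_rice_storage0) follow a recognizable verb_agent[_item]_target naming pattern. As a minimal sketch in Python, assuming that pattern holds (this parser is illustrative, not part of the MindAgent codebase):

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class DispatchAction:
    verb: str            # "goto" or "get"
    agent: int           # agent index, e.g. 0 for agent0
    item: Optional[str]  # ingredient for "get", None for "goto"
    target: str          # location, e.g. "storage0"

# Two action shapes appear in the logs:
#   goto_agent<N>_<target>          e.g. goto_agent0_storage0
#   get_agent<N>_<item>_<target>    e.g. get_agent1_rice_storage0
_ACTION_RE = re.compile(
    r"^(?P<verb>goto|get)_agent(?P<agent>\d+)"
    r"(?:_(?P<item>[a-zA-Z]+))?_(?P<target>[a-zA-Z]+\d+)$"
)

def parse_action(action: str) -> DispatchAction:
    """Parse one dispatched action string into its components."""
    m = _ACTION_RE.match(action)
    if m is None:
        raise ValueError(f"unrecognized action: {action!r}")
    return DispatchAction(m["verb"], int(m["agent"]), m["item"], m["target"])
```

For example, parse_action("get_agent1_rice_storage0") yields DispatchAction(verb="get", agent=1, item="rice", target="storage0"), while a "goto" action leaves item as None.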
2309.09971 | 86 | (a) Welcome Screen for human evaluation
(b) Human Evaluation Example
[Screenshot: level_3 game state. Time step 2 of 60; current dish: tunaSushi, remaining time: 24; robot states and kitchen states panels; dishes completed: none; previous actions: get_agent0_tuna_storage0, get_agent1_rice_storage0, get_agent2_tuna_storage0, get_agent3_rice_storage0]
(c) Human Evaluation Example
(d) Human Instructions
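The abstract above introduces CoS, an auto-metric for collaboration efficiency, without defining it in this excerpt. As a hedged stand-in (the formula below is an assumption for illustration, not the paper's CoS definition), one plausible shape is the per-episode dish-completion ratio averaged across episodes:

```python
from typing import Sequence

def collaboration_score(completed: Sequence[int], attempted: Sequence[int]) -> float:
    """Average per-episode ratio of completed to attempted dishes.

    Illustrative stand-in for a collaboration-efficiency metric;
    NOT the CoS definition from the MindAgent paper.
    """
    if len(completed) != len(attempted) or not completed:
        raise ValueError("episode lists must be non-empty and equal length")
    # Skip episodes with no attempted dishes to avoid division by zero.
    ratios = [c / a for c, a in zip(completed, attempted) if a > 0]
    if not ratios:
        raise ValueError("no episodes with attempted dishes")
    return sum(ratios) / len(ratios)
```

For example, collaboration_score([2, 3], [4, 3]) averages 0.5 and 1.0 to give 0.75.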
| 2309.09971#86 | MindAgent: Emergent Gaming Interaction |