id | title | content | prechunk_id | postchunk_id | arxiv_id | references
---|---|---|---|---|---|---
2307.16125#21 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Visual Information Extraction. For constructing questions related to spatial understanding, we interpret the rich information in each image with texts using multiple pretrained models, so that ChatGPT/GPT-4 can understand the image and create questions accordingly. For constructing questions related to temporal understanding, considering that extracting reliable temporal information from videos (especially fine-grained actions and long-term temporal context) is extremely difficult given existing foundation models, we utilize the ground-truth annotations of video datasets. We will explore how to generate questions based on automatically extracted video information in the future. The extraction of visual information for images includes the following parts: | 2307.16125#20 | 2307.16125#22 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#22 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | • Image Captions. Image captions contain the overall description of an image. We employ BLIP2 [38] and Tag2Text [27] to create captions for each image. The former creates captions for the whole image while the latter generates captions based on descriptions of each instance. The two models complement each other to depict the image content within a single sentence. • Instance Descriptions. Besides captions, which may ignore specific details in the image, we also extract visual information from images using instance-level descriptions, including object detection, attribute detection, and dense captions. Specifically, we use SAM [29] to segment each instance in the image and obtain their bounding boxes according to the segmentation results. The object labels are obtained using Tag2Text [27]. Besides, we also utilize an attribute detector [30] to obtain the attributes of each instance in the image. Finally, we employ GRiT [28] to generate dense captions, which describe each detected instance in the image with a short sentence. These instance-level descriptions are complementary to the image captions, further enriching the visual information of each image. | 2307.16125#21 | 2307.16125#23 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#23 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | • Textual Elements. Besides objects, the texts in the image also contain important information describing the image. We employ PaddleOCR [31] for detecting textual elements. Question-Answer Generation. After extracting visual information from the image and video, we task ChatGPT/GPT-4 with generating multiple-choice questions based on the extracted information or video annotations. For each of the spatial understanding evaluation dimensions, we carefully design prompts and ask ChatGPT/GPT-4 to create multiple-choice questions with four candidate options based on the extracted visual information. We create questions with ChatGPT for all evaluation dimensions, except for the reasoning dimension, where we use GPT-4 [2] due to its exceptional reasoning capability. For each question, we ask ChatGPT/GPT-4 to create four choices with one correct option and three distractors. We try to make the multiple-choice questions challenging by encouraging the three wrong choices to be similar to the correct one. The detailed prompts for generating multiple-choice questions for different evaluation dimensions are listed in Fig. 4. For generating questions related to temporal understanding, we utilize the ground-truth annotations of the selected videos as the answers of the multiple-choice questions and employ ChatGPT to generate three distractors. | 2307.16125#22 | 2307.16125#24 | 2307.16125 | [
"2306.05424"
]
|
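To make the question-generation step above concrete, below is a minimal Python sketch of how the extracted visual information might be assembled into a single prompt for ChatGPT/GPT-4. The function and variable names (e.g. `build_mcq_prompt`, `call_chatgpt`) are illustrative assumptions, not part of SEED-Bench's released code.

```python
def build_mcq_prompt(instruction, captions, instance_descriptions, ocr_text):
    """Combine the dimension-specific instruction with the extracted visual information."""
    parts = [
        instruction,  # default instruction + dimension-specific instruction (see Fig. 4)
        "Captions:\n" + "\n".join(captions),                          # BLIP2 / Tag2Text outputs
        "Object and attribute detections:\n" + "\n".join(instance_descriptions),
        "OCR:\n" + (ocr_text or "none"),
        "Create multiple-choice questions with four options (A, B, C, D), one correct "
        "answer, and three distractors that are similar to the correct answer.",
    ]
    return "\n\n".join(parts)

# prompt = build_mcq_prompt(default_plus_dimension_instruction,
#                           image_captions, instance_descriptions, ocr_text)
# raw_output = call_chatgpt(prompt)  # hypothetical wrapper around the ChatGPT/GPT-4 API
```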
2307.16125#24 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Automatic Filtering. Our benchmark aims at evaluating the multimodal vision-language understanding capability of MLLMs. However, we observe that some generated questions can be correctly answered by LLMs without seeing the image. We argue that such questions are not helpful for evaluating the visual comprehension capability of MLLMs. To this end, we feed the generated questions (without the image) into three powerful LLMs, including Vicuna-7B [4], Flan-T5-XXL [1] and LLaMA-7B [5], and ask them to answer the questions. We empirically find that 5.52% of the generated questions can be correctly answered by all three LLMs. We filter out these questions from our benchmark. | 2307.16125#23 | 2307.16125#25 | 2307.16125 | [
"2306.05424"
]
|
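The automatic filtering step described above can be summarized with a short sketch. It assumes a helper `answer_without_image(llm, question, choices)` that returns the option a text-only LLM picks from the question alone; questions answered correctly by all three LLMs are removed. This is an illustration of the procedure, not the authors' code.

```python
def filter_vision_dependent(questions, llms, answer_without_image):
    """Drop questions that every text-only LLM answers correctly without the image.

    `questions` is assumed to be a list of dicts with "question", "choices" and
    "answer" fields; `llms` would be the three models (Vicuna-7B, Flan-T5-XXL,
    LLaMA-7B) mentioned in the text.
    """
    kept, removed = [], []
    for q in questions:
        solved_by_all = all(
            answer_without_image(llm, q["question"], q["choices"]) == q["answer"]
            for llm in llms
        )
        (removed if solved_by_all else kept).append(q)
    return kept, removed
```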
2307.16125#25 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Human Annotation. To ensure the accuracy and objectiveness of SEED-Bench, we further employ human annotators to verify the generated question/answer pairs. Human annotators are asked to choose the correct answer for each multiple-choice question and categorize each question into one of the evaluation dimensions. If a question cannot be answered based on the visual input, has no correct choice, or has multiple correct choices, it is discarded by the human annotators. This results in a clean, high-quality and well-categorized benchmark for evaluation with a total of 19K multiple-choice questions. The statistics of the number of multiple-choice questions in each evaluation dimension are shown in Fig. 1. We can observe a minimum number of questions in text recognition with 85 samples, and a maximum number in instance localization with 4649 samples. We will maintain an even distribution among multiple-choice questions associated with different evaluation dimensions in the future. | 2307.16125#24 | 2307.16125#26 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#26 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | # Default Instruction: "You are an AI visual assistant that can analyze a single image. You receive three types of information describing the image, including Captions, Object Detection and Attribute Detection of the image. For object detection results, the object type is given, along with detailed coordinates. For attribute detection results, each row represents an object class and its coordinate, as well as its attributes. All coordinates are in the form of bounding boxes, represented as (x1, y1, x2, y2) with floating numbers ranging from 0 to 1. These values correspond to the top left x, top left y, bottom right x, and bottom right y. | 2307.16125#25 | 2307.16125#27 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#27 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Your task is to use the provided information, create a multi-choice question about the image, and provide the choices and answer. Instead of directly mentioning the bounding box coordinates, utilize this data to explain the scene using natural language. Include details like object counts, position of the objects, relative position between the objects. When using the information from the caption and coordinates, directly explain the scene, and do not mention that the information source is the caption or the bounding box. Always answer as if you are directly looking at the image. | 2307.16125#26 | 2307.16125#28 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#28 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Create several questions, each with 4 choices. Make the question challenging by not including the visual content details in the question so that the user needs to reason about that first. Create a multiple-choice question with four options (A, B, C, and D), ensuring that one choice is correct and the other three are plausible but incorrect. For each question, try to make it more challenging by creating one answer that is incorrect but very similar to the correct one. Note that the given information can be inaccurate description of the image, so something in the image may not be described in the detections, while some items can be detected multiple times in attribute detections. Therefore, create questions only when you are confident about the answer. | 2307.16125#27 | 2307.16125#29 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#29 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Don't explain your choice." # Scene Understanding Instruction: "Create complex questions about the major content of the image. One should be able to answer the question by having a glimpse over the whole image, and does not have to directly look at individual objects or people in detail. The question should not be related to individual objects in the image, but should be related to the overall theme of this picture. " # Instance Identity Instruction: "Create complex questions about the identity of objects appeared in the image, such as its type/class or its existence. For example, you may ask "What an object is?" or "Does some object appear in the image?". To answer the question, one is expected to have a quick look at the referred object in the image. " # Instance Attribute Instruction: "Create complex questions about the attribute of a certain object, such as its color, shape or fine-grained type. To answer the question, one should carefully look at the visual appearance of a certain object in the image, but does not have to consider its information of other aspects, such as spatial location or its identity. " # Instance Localization Instruction: "Create complex questions about the location of a certain object in the image. The question should be created based on the coordinates of the objects. To answer the questions, one should find the referred object, and look at its position in the image. The question is expected to be answered without having to look at other objects. " # Instance Counting Instruction: "Create questions that involve the number of appearance of a certain object. Start with "How many ....". The choices of the question should be numbers. To answer the question, one should find and count all of the mentioned objects in the image. " # Spatial Relation Instruction: "Create questions about spatial relations between two objects. The questions should be mainly based on the coordinates of the two objects. To answer the questions, one should find the two mentioned objects, and find their relative spatial relation to answer the question. " # Instance Interaction Instruction: "Create questions about the relations and connections between two objects, such as "What a person is doing to an object" and "What is the relation between two objects". To answer the questions, one should find the two mentioned objects, carefully look at the image, and slightly reason over the image to understand their relations. " # Visual Reasoning Instruction: "Create complex questions beyond describing the scene. | 2307.16125#28 | 2307.16125#30 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#30 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | To answer such questions, one should first understand the visual content, then based on the background knowledge or reasoning, either explain why the things are happening that way, or provide guides and help to user's request. Make the question challenging by not including the visual content details in the question so that the user needs to reason about that first. " # Text Recognition Instruction: "Create questions that are related to the texts in the image. Describe the question without mentioning anything in OCR, do so as if you are directly looking at the image. " Figure 4: Prompts for generating multiple-choice questions for different evaluation dimensions. Table 3: Evaluation results of different models on SEED-Bench, where | 2307.16125#29 | 2307.16125#31 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#31 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | "Spatial" shows the averaged performance on nine dimensions for evaluating spatial understanding, and "Temporal" shows the averaged performance on three dimensions for evaluating temporal understanding.

| Type | Model | Language Model | Spatial Acc | Spatial Rank | Temporal Acc | Temporal Rank | Overall Acc | Overall Rank |
|---|---|---|---|---|---|---|---|---|
| LLM | Flan-T5 [1] | Flan-T5-XL | 27.32 | 17 | 28.56 | 11 | 27.65 | 17 |
| LLM | Vicuna [4] | Vicuna-7B | 28.16 | 16 | 29.46 | 8 | 28.50 | 16 |
| LLM | LLaMA [5] | LLaMA-7B | 26.56 | 18 | 27.27 | 13 | 26.75 | 18 |
| ImageLLM | BLIP-2 [6] | Flan-T5-XL | 49.74 | 3 | 36.71 | 3 | 46.35 | 3 |
| ImageLLM | InstructBLIP [10] | Flan-T5-XL | 57.80 | 2 | 38.31 | 1 | 52.73 | 2 |
| ImageLLM | InstructBLIP Vicuna [10] | Vicuna-7B | 58.76 | 1 | 38.05 | 2 | 53.37 | 1 |
| ImageLLM | LLaVA [8] | LLaMA-7B | 36.96 | 8 | 23.75 | 16 | 33.52 | 9 |
| ImageLLM | MiniGPT-4 [7] | Flan-T5-XL | 47.40 | 4 | 29.89 | 7 | 42.84 | 4 |
| ImageLLM | VPGTrans [40] | LLaMA-7B | 41.81 | 5 | 31.40 | 5 | 39.10 | 5 |
| ImageLLM | MultiModal-GPT [12] | LLaMA-7B | 34.54 | 12 | 29.21 | 10 | 33.15 | 11 |
| ImageLLM | Otter [11] | LLaMA-7B | 35.16 | 11 | 30.35 | 6 | 33.91 | 8 |
| ImageLLM | OpenFlamingo [41] | LLaMA-7B | 34.51 | 13 | 29.25 | 9 | 33.14 | 12 |
| ImageLLM | LLaMA-Adapter V2 [42] | LLaMA-7B | 35.19 | 10 | 25.75 | 14 | 32.73 | 13 |
| ImageLLM | GVT [33] | Vicuna-7B | 35.49 | 9 | 27.77 | 12 | 33.48 | 10 |
| ImageLLM | mPLUG-Owl [9] | LLaMA-7B | 37.88 | 7 | 23.02 | 18 | 34.01 | 7 |
| VideoLLM | VideoChat [15] | Vicuna-7B | 39.02 | 6 | 33.68 | 4 | 37.63 | 6 |
| VideoLLM | Video-ChatGPT [16] | LLaMA-7B | 33.88 | 14 | 23.46 | 17 | 31.17 | 14 |
| VideoLLM | Valley [17] | LLaMA-13B | 32.04 | 15 | 25.41 | 15 | 30.32 | 15 |

| 2307.16125#30 | 2307.16125#32 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#32 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | # 3.4 Evaluation Strategy Different from MMBench [26], which employs ChatGPT to match a model's prediction to one of the choices in a multiple-choice question (achieving only an 87.0% alignment rate), we adopt the answer ranking strategy [10, 32, 39] for evaluating existing MLLMs with multiple-choice questions. Specifically, for each choice of a question, we compute the likelihood that an MLLM generates the content of this choice given the question. We select the choice with the highest likelihood as the model's prediction. Our evaluation strategy does not rely on the instruction-following capabilities of models to output "A", "B", "C", or "D". | 2307.16125#31 | 2307.16125#33 | 2307.16125 | [
"2306.05424"
]
|
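The answer-ranking strategy can be illustrated with a small sketch. It assumes a Hugging Face-style causal language model and tokenizer and scores each choice by the sum of log-probabilities of its tokens conditioned on the question (for an MLLM, the image features would additionally be part of the conditioning); it is a sketch of the idea, not the exact SEED-Bench evaluation code.

```python
import torch
import torch.nn.functional as F

def choice_log_likelihood(model, tokenizer, question, choice):
    """Sum of log-probabilities of the choice tokens given the question."""
    q_ids = tokenizer(question, return_tensors="pt").input_ids
    c_ids = tokenizer(choice, add_special_tokens=False, return_tensors="pt").input_ids
    input_ids = torch.cat([q_ids, c_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits            # (1, seq_len, vocab_size)
    log_probs = F.log_softmax(logits[:, :-1], dim=-1)
    # predictions for the choice tokens start at position len(question_tokens) - 1
    choice_log_probs = log_probs[:, q_ids.shape[1] - 1:]
    token_scores = choice_log_probs.gather(-1, c_ids.unsqueeze(-1)).squeeze(-1)
    return token_scores.sum().item()

def rank_answer(model, tokenizer, question, choices):
    """Return the index of the choice with the highest likelihood under the model."""
    scores = [choice_log_likelihood(model, tokenizer, question, c) for c in choices]
    return max(range(len(choices)), key=lambda i: scores[i])
```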
2307.16125#33 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Furthermore, this evaluation strategy eliminates the impact of the order of multiple-choice options on the model's performance. # 4 Evaluation Results # 4.1 Models Based on our SEED-Bench, we evaluate 18 models including 3 LLMs, i.e., Flan-T5 [1], Vicuna [4], LLaMA [5], 12 ImageLLMs, i.e., OpenFlamingo [41], BLIP-2 [6], MiniGPT-4 [7], LLaVA [8], mPLUG-Owl [9], InstructBLIP [10], Otter [11], MultiModal-GPT [12], GVT [33], PandaGPT [13], VPGTrans [40], LLaMA-Adapter V2 [42], and 3 VideoLLMs, i.e., VideoChat [15], Video-ChatGPT [16] and Valley [17]. Each model is evaluated on all 12 dimensions, covering both spatial and temporal understanding. For ImageLLMs, besides the evaluation of spatial understanding, we aim to investigate their capability to perform temporal reasoning among multiple frames. For VideoLLMs, we seek to explore whether their spatial understanding abilities have degraded by taking a single image as the input. | 2307.16125#32 | 2307.16125#34 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#34 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | # 4.2 Results The evaluation results of different models on SEED-Bench are listed in Table 3, where the accuracy refers to the proportion of correctly answered multiple-choice questions relative to the total number of questions. We are surprised to observe that InstructBLIP [10] not only achieves the best performance based on the averaged results across nine dimensions for evaluating spatial understanding, but also surpasses VideoLLMs in terms of the averaged results across three dimensions for evaluating temporal understanding. We display leaderboards of various evaluation dimensions on SEED-Bench in Fig. 5 to provide a comprehensive assessment of different models. The overall leaderboard based on the | 2307.16125#33 | 2307.16125#35 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#35 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | [Figure 5 shows per-dimension leaderboards: (1) Scene Understanding, (2) Instance Identity, (3) Instance Attributes, (4) Instance Location, (5) Instance Counting, (6) Spatial Relations, (7) Instance Interaction, (8) Visual Reasoning, (9) Text Recognition, (10) Action Recognition, (11) Action Prediction, (12) Procedure Understanding.] Figure 5: Leaderboards of different evaluation dimensions on SEED-Bench. | 2307.16125#34 | 2307.16125#36 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#36 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | averaged results across all the evaluation dimensions are shown in Fig. 1. To better showcase the capabilities of models across different evaluation dimensions, we further visualize the ranking of each model within each evaluation dimension in Fig. 6, where darker colors represent higher ranks. Figure 6: Illustration of each model's performance across different evaluation dimensions, where darker colors represent higher ranks. We can observe that the BLIP series [6, 10] models achieve competitive results in multiple evaluation dimensions, but they are not good at visual reasoning and action recognition. The VideoLLM Valley [17] achieves suboptimal performance in the majority of evaluation dimensions. LLaVA [8] exhibits unparalleled capabilities in the evaluation of text recognition compared to other evaluation dimensions. In terms of specific evaluation dimensions, the MiniGPT-4 [7] and mPLUG-Owl [9] models perform better in visual reasoning, while the VPGTrans [40] model excels in action recognition and procedure understanding. The LLaMA-Adapter V2 [42] model shows more proficiency in action recognition. | 2307.16125#35 | 2307.16125#37 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#37 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | What's more, MultiModal-GPT [12], Otter [11], OpenFlamingo [41], GVT [33], and the three VideoLLMs [15, 16, 17] exhibit balanced strength across various evaluation dimensions. # 4.3 Analysis Through the comprehensive and objective evaluation of various models on SEED-Bench, we have observed a number of findings that can bring insights for future work. Most MLLMs still exhibit limited performance across all 12 evaluation dimensions. As shown in Figs. 1 and 5, most MLLMs (except the BLIP series models) cannot reach 50% accuracy on either the average performance or the performance on more than three individual evaluation dimensions. In some specific evaluation dimensions (e.g., visual reasoning), it seems that most MLLMs achieve high accuracy. However, when comparing the performance of MLLMs to LLMs, we observe that the performance improvement of most MLLMs is still relatively limited. MLLMs achieve relatively high performance on global image comprehension. On the evaluation of scene understanding and visual reasoning, the accuracy of most MLLMs is higher than 40%, and all MLLMs outperform LLMs. This shows that MLLMs are more proficient in global understanding and reasoning of images, compared with other evaluation dimensions that require fine-grained instance-level comprehension. InstructBLIP achieves top performance on 8 of 12 evaluation dimensions. We can observe that InstructBLIP outperforms other models on 8 evaluation dimensions, and the possible explanations for this superior performance are as follows. (a) The instruction-tuning data of InstructBLIP contains 16M samples in total (larger than other instruction-tuning datasets), and covers a wide range of multimodal tasks, even including QA data of OCR and temporal visual reasoning. (b) The weights of the LLMs are frozen when performing instruction-tuning of InstructBLIP, which may alleviate catastrophic forgetting. However, InstructBLIP series models still perform poorly on action recognition and | 2307.16125#36 | 2307.16125#38 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#38 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | procedure understanding, which differ significantly from the instruction-tuning data. For instance, on action recognition, which requires the understanding of fine-grained actions in Something-Something-v2, InstructBLIP series models cannot achieve a significant performance gain compared to LLMs (i.e., lower than 2%). This indicates that InstructBLIP series models may fail to generalize well on out-of-distribution data. MLLMs show weaker abilities in understanding spatial relationships between objects. The top-ranked model InstructBLIP only achieves 40% accuracy on the evaluation of spatial relations, which shows that recognizing relative spatial relationships between instances is challenging because there can be many possible arrangements and combinations of spatial relationships between instances. Additionally, spatial relationships between objects may cause ambiguity in some cases, making it difficult to determine their relationship. Most MLLMs show poor performance for text recognition. Apart from InstructBLIP, all other models achieve an accuracy lower than 40% for text recognition due to the lack of textual elements in multimodal pre-training datasets. Since the ability to accurately identify and extract text from images is important, future work should develop models that are better equipped to handle text recognition by pre-training on datasets with rich textual elements in visual data. VideoLLMs achieve promising results on spatial understanding. For example, VideoChat achieves 39.98% accuracy (ranking 4th) on instance localization, surpassing LLaVA by 11.55% and performing only 3.58% lower than the top-1 model. It shows that VideoChat's ability of spatial understanding does not degrade when jointly training on both image and video data during the pre-training and instruction-tuning stages. Most MLLMs exhibit unsatisfactory performance on fine-grained temporal understanding. It is notable that on the evaluation of procedure understanding, the top-ranked model, VPGTrans, achieves an accuracy that is only 5% higher than that of LLaMA. The performance improvement of the following 4 MLLMs is even less than 1.2% compared with LLaMA. This demonstrates that it is extremely difficult for both the ImageLLMs and VideoLLMs to perform fine-grained temporal reasoning so that they can recognize and sort the key actions in a video. VideoLLMs fail to achieve competitive performance on temporal understanding. | 2307.16125#37 | 2307.16125#39 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#39 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Although VideoLLMs are instruction-tuned on video data, they do not exhibit a significant advantage on the evaluation dimensions for temporal understanding. Surprisingly, two VideoLLMs (Video-ChatGPT and Valley) even perform worse than most ImageLLMs on action recognition, action prediction and procedure understanding. It indicates that the capabilities of existing VideoLLMs for fine-grained action recognition, temporal relationship understanding and temporal reasoning are still limited. Similar concerns about existing VideoLLMs are also presented in recent works [15, 16]. # 5 Conclusion In this work, we propose a large-scale benchmark, SEED-Bench, to provide a comprehensive and objective evaluation of Multimodal Large Language Models (MLLMs) on generative comprehension. SEED-Bench consists of 19K multiple-choice questions with accurate human annotations, which cover 12 evaluation dimensions for both spatial and temporal understanding. We design an advanced pipeline to create multiple-choice questions that target specific evaluation dimensions, facilitating the scalability of evaluation data across a variety of domains. We also integrate automatic filtering and manual verification to improve the quality of the generated questions and answers. We conduct a thorough evaluation of 18 models, analyzing and comparing their performances to provide insights for future research. We plan to launch and consistently maintain a leaderboard, offering a platform for the community to assess model performance. We will continue to further broaden the evaluation dimensions of SEED-Bench with more data. | 2307.16125#38 | 2307.16125#40 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#40 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | # Acknowledgements We sincerely acknowledge Junting Pan (CUHK MMLab) for the insightful suggestions, Zhan Tong (Nanjing University) for the data processing, and Yi Chen (Tencent AI Lab) for the engaging discussions. # References [1] Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022. [2] OpenAI. Gpt-4 technical report, 2023. | 2307.16125#39 | 2307.16125#41 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#41 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | [3] OpenAI. Introducing chatgpt. https://openai.com/blog/chatgpt, 2022. [4] FastChat. Vicuna. https://github.com/lm-sys/FastChat, 2023. [5] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: | 2307.16125#40 | 2307.16125#42 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#42 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. [6] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. ICML, 2023. [7] Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. | 2307.16125#41 | 2307.16125#43 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#43 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023. [8] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv preprint arXiv:2304.08485, 2023. [9] Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. mplug-owl: Modularization empowers large language models with multimodality. arXiv preprint arXiv:2304.14178, 2023. [10] Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. | 2307.16125#42 | 2307.16125#44 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#44 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Instructblip: Towards general-purpose vision-language models with instruction tuning. arXiv preprint arXiv:2305.06500, 2023. [11] Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu. Otter: A multi-modal model with in-context instruction tuning. arXiv preprint arXiv:2305.03726, 2023. [12] Tao Gong, Chengqi Lyu, Shilong Zhang, Yudong Wang, Miao Zheng, Qian Zhao, Kuikun Liu, Wenwei Zhang, Ping Luo, and Kai Chen. Multimodal-gpt: A vision and language model for dialogue with humans, 2023. [13] Yixuan Su, Tian Lan, Huayang Li, Jialu Xu, Yan Wang, and Deng Cai. Pandagpt: One model to instruction-follow them all. arXiv preprint arXiv:2305.16355, 2023. [14] Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, and Furu Wei. | 2307.16125#43 | 2307.16125#45 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#45 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Kosmos-2: Grounding multimodal large language models to the world. arXiv preprint arXiv:2306.14824, 2023. [15] KunChang Li, Yinan He, Yi Wang, Yizhuo Li, Wenhai Wang, Ping Luo, Yali Wang, Limin Wang, and Yu Qiao. Videochat: Chat-centric video understanding. arXiv preprint arXiv:2305.06355, 2023. | 2307.16125#44 | 2307.16125#46 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#46 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | [16] Muhammad Maaz, Hanoona Rasheed, Salman Khan, and Fahad Shahbaz Khan. Video-chatgpt: Towards detailed video understanding via large vision and language models. arXiv preprint arXiv:2306.05424, 2023. [17] Ruipu Luo, Ziwang Zhao, Min Yang, Junwei Dong, Minghui Qiu, Pengcheng Lu, Tao Wang, and Zhongyu Wei. Valley: Video assistant with large language model enhanced ability. arXiv preprint arXiv:2306.07207, 2023. [18] Yuying Ge, Yixiao Ge, Ziyun Zeng, Xintao Wang, and Ying Shan. Planting a seed of vision in large language model. arXiv preprint arXiv:2307.08041, 2023. [19] Quan Sun, Qiying Yu, Yufeng Cui, Fan Zhang, Xiaosong Zhang, Yueze Wang, Hongcheng Gao, Jingjing Liu, Tiejun Huang, and Xinlong Wang. Generative pretraining in multimodality. arXiv preprint arXiv:2307.05222, 2023. [20] Yu Lili, Shi Bowen, Pasunuru Ram, Miller Benjamin, Golovneva Olga, Wang Tianlu, Babu Arun, Tang Binh, Karrer Brian, Sheynin Shelly, Ross Candace, Polyak Adam, Howes Russ, Sharma Vasu, Xu Jacob, Singer Uriel, Li (AI) Daniel, Ghosh Gargi, Taigman Yaniv, Fazel-Zarandi Maryam, Celikyilmaz Asli, Zettlemoyer Luke, and Aghajanyan Armen. Scaling autoregressive multi-modal models: Pretraining and instruction tuning. 2023. [21] Jing Yu Koh, Daniel Fried, and Ruslan Salakhutdinov. Generating images with multimodal language models. arXiv preprint arXiv:2305.17216, 2023. [22] Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. | 2307.16125#45 | 2307.16125#47 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#47 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6904–6913, 2017. [23] Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Zhenyu Qiu, Wei Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, and Rongrong Ji. Mme: A comprehensive evaluation benchmark for multimodal large language models. arXiv preprint arXiv:2306.13394, 2023. [24] Zhenfei Yin, Jiong Wang, Jianjian Cao, Zhelun Shi, Dingning Liu, Mukai Li, Lu Sheng, Lei Bai, Xiaoshui Huang, Zhiyong Wang, et al. | 2307.16125#46 | 2307.16125#48 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#48 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Lamm: Language-assisted multi-modal instruction-tuning dataset, framework, and benchmark. arXiv preprint arXiv:2306.06687, 2023. [25] Peng Xu, Wenqi Shao, Kaipeng Zhang, Peng Gao, Shuo Liu, Meng Lei, Fanqing Meng, Siyuan Huang, Yu Qiao, and Ping Luo. Lvlm-ehub: A comprehensive evaluation benchmark for large vision-language models. arXiv preprint arXiv:2306.09265, 2023. [26] Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, et al. Mmbench: Is your multi-modal model an all-around player? arXiv preprint arXiv:2307.06281, 2023. [27] Xinyu Huang, Youcai Zhang, Jinyu Ma, Weiwei Tian, Rui Feng, Yuejie Zhang, Yaqian Li, Yandong Guo, and Lei Zhang. | 2307.16125#47 | 2307.16125#49 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#49 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Tag2text: Guiding vision-language model via image tagging. arXiv preprint arXiv:2303.05657, 2023. [28] Jialian Wu, Jianfeng Wang, Zhengyuan Yang, Zhe Gan, Zicheng Liu, Junsong Yuan, and Lijuan Wang. Grit: A generative region-to-text transformer for object understanding. arXiv preprint arXiv:2212.00280, 2022. | 2307.16125#48 | 2307.16125#50 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#50 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | [29] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, and Ross Girshick. Segment anything. arXiv:2304.02643, 2023. [30] Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. Vinvl: | 2307.16125#49 | 2307.16125#51 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#51 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Revisiting visual representations in vision-language models. In CVPR, 2021. [31] PaddleOCR. https://github.com/PaddlePaddle/PaddleOCR. [32] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877– | 2307.16125#50 | 2307.16125#52 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#52 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | 1901, 2020. [33] Guangzhi Wang, Yixiao Ge, Xiaohan Ding, Mohan Kankanhalli, and Ying Shan. What makes for good visual tokenizers for large language models? arXiv preprint arXiv:2305.12223, 2023. [34] Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In ACL, 2018. [35] Raghav Goyal, Samira Ebrahimi Kahou, Vincent Michalski, Joanna Materzynska, Susanne Westphal, Heuna Kim, Valentin Haenel, Ingo Fruend, Peter Yianilos, Moritz Mueller-Freitag, et al. The "something something" video database for learning and evaluating visual common sense. In ICCV, 2017. [36] Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Antonino Furnari, Evangelos Kazakos, Jian Ma, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, et al. Rescaling egocentric vision. arXiv preprint arXiv:2006.13256, 2020. [37] Hilde Kuehne, Ali Arslan, and Thomas Serre. | 2307.16125#51 | 2307.16125#53 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#53 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | The language of actions: Recovering the syntax and semantics of goal-directed human activities. In CVPR, 2014. [38] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In ICML, 2022. [39] Stephanie Lin, Jacob Hilton, and Owain Evans. Truthfulqa: Measuring how models mimic human falsehoods. arXiv preprint arXiv:2109.07958, 2021. | 2307.16125#52 | 2307.16125#54 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#54 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | [40] Ao Zhang, Hao Fei, Yuan Yao, Wei Ji, Li Li, Zhiyuan Liu, and Tat-Seng Chua. Transfer visual prompt generator across llms. arXiv preprint arXiv:2305.01278, 2023. [41] ml_foundations. Openflamingo. https://github.com/mlfoundations/open_flamingo, 2023. [42] Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, Hongsheng Li, and Yu Qiao. | 2307.16125#53 | 2307.16125#55 | 2307.16125 | [
"2306.05424"
]
|
2307.16125#55 | SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension | Llama-adapter v2: Parameter-efficient visual instruction model. arXiv preprint arXiv:2304.15010, 2023. | 2307.16125#54 | 2307.16125 | [
"2306.05424"
]
|
|
2307.15833#0 | Dialogue Shaping: Empowering Agents through NPC Interaction | arXiv:2307.15833v1 [cs.CL] 28 Jul 2023 # Dialogue Shaping: Empowering Agents through NPC Interaction # Wei Zhou, Xiangyu Peng and Mark Riedl Georgia Institute of Technology, Atlanta, GA, 30332, USA Abstract One major challenge in reinforcement learning (RL) is the large number of steps the RL agent needs to converge during training and learn the optimal policy, especially in text-based game environments where the action space is extensive. However, non-player characters (NPCs) sometimes hold key information about the game, which can potentially help to train RL agents faster. Thus, this paper explores how to interact and converse with NPC agents to get the key information using large language models (LLMs), as well as how to incorporate this information to speed up the RL agent's training using knowledge graphs (KGs) and Story Shaping. # Keywords Large Language Model, ChatGPT, Reinforcement Learning, Knowledge Graph, Text adventure game # 1. Introduction | 2307.15833#1 | 2307.15833 | [
"2301.10107"
]
|
|
2307.15833#1 | Dialogue Shaping: Empowering Agents through NPC Interaction | Reinforcement learning (RL) has demonstrated remarkable effectiveness in solving intricate decision-making tasks, but its trial-and-error approach often leads to slow convergence to the optimal policy. In text-adventure games, NPCs possess crucial information that could spare the agent from extensive trial-and-error. Utilizing this prior knowledge could significantly reduce the agent's policy search space, making it more efficient by breaking down complex tasks into smaller, focused objectives. For instance, knowing that "killing the dragon" requires a sword allows the agent to concentrate on finding the sword directly, rather than wasting steps exploring how to defeat the dragon. Large Language Models (LLMs) are incredibly capable of conversational tasks and are highly configurable using prompting techniques. Thus, we chose to use them as the dialogue module responsible for talking to the NPC. Meanwhile, they are not as efficient as an RL agent in terms of searching for the optimal chain of actions. Therefore, we chose to keep the RL agent as the main component responsible for searching for the optimal policy while speeding up its search using a dialogue module composed of LLMs. The RL agent acts as an action module and the LLMs act as a dialogue module. Yet, we still need to find a way to bridge these two modules, i.e., incorporating the information that the dialogue module retrieves into the action module. For this purpose, we turn to the technique AIIDE-23: The 19th AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, October 08–12, 2023, Salt Lake City, UT. [email protected] (W. Zhou); [email protected] (X. Peng); [email protected] (M. Riedl). © 2023 CEUR Workshop Proceedings (CEUR-WS.org) | 2307.15833#0 | 2307.15833#2 | 2307.15833 | [
"2301.10107"
]
|
2307.15833#2 | Dialogue Shaping: Empowering Agents through NPC Interaction | of Story Shaping [1], which is able to guide the action module to imitate the optimal trajectory. In this paper, we propose Dialogue Shaping, a framework that is able to extract useful information through conversation with NPCs, and then convert the information into knowledge graphs which are then used to speed up the RL agent's convergence to the optimal policy by using the Story Shaping technique [1]. # 2. Background and Related Work Reinforcement Learning in Text Games Text games involve turn-based interactions where players read descriptions of the game's environment in natural language and respond with short text-based actions. These games can be described using partially-observable Markov Decision Processes, denoted as ⟨S, T, A, O, Ω, R, γ⟩, representing possible states, transition probabilities, vocabulary for commands, observation probabilities, reward function, and discount factor. The RL agent's goal is to learn a policy π(o) → a | 2307.15833#1 | 2307.15833#3 | 2307.15833 | [
"2301.10107"
]
|
2307.15833#3 | Dialogue Shaping: Empowering Agents through NPC Interaction | to maximize expected future rewards. Large Language Models in RL The power of Large Language Models (LLMs) has gained significant attention in recent years due to their advanced ability to adapt to numerous downstream tasks. ChatGPT, an LLM chatbot created by OpenAI, offers diverse interaction modes, and users can engage with it by providing prompts for acting as the NPC and the agent in text games [2]. Recent studies have also explored the integration of large language models into reinforcement learning frameworks to enhance the capabilities of agents. The Contextual Action Language Model (CALM) [3] used an LLM to generate a set of concise candidate actions at each step of the game for the reinforcement learning agent, thereby greatly reducing the | 2307.15833#2 | 2307.15833#4 | 2307.15833 | [
"2301.10107"
]
|
2307.15833#4 | Dialogue Shaping: Empowering Agents through NPC Interaction | action space of the RL agent. In contrast, we utilize Large Language Models in conversational settings to extract useful game information and incorporate it into the game as extra reward signals to guide the RL agent. # 3. Preliminaries 3.1. Text Games We create three text games in the LIGHT environment [4], which is a large-scale crowdsourced text adventure game framework, in which agents can perceive, emote and act. The LIGHT environment also provides a database of rooms, characters, and objects, from which we can build our custom games. The visualization of one of the games we created and used in the experiments can be found in Figure 1. Figure 1: LIGHT Game Map for Game 1 3.2. Knowledge Graph A knowledge graph (KG) consists of triples ⟨subject, relation, object⟩, encoding information about entities, their attributes, and relationships. | 2307.15833#3 | 2307.15833#5 | 2307.15833 | [
"2301.10107"
]
|
2307.15833#5 | Dialogue Shaping: Empowering Agents through NPC Interaction | Our method uses two types of KGs: an internal KG and a target KG. During RL exploration in the text game, the internal KG represents the agent's current state, including the room it's in and the objects it possesses [5, 6, 7, 8, 9, 10]. We update this KG at each step based on changes in the game environment (e.g., moving to a new room) or the agent's actions (e.g., acquiring objects). The target KG describes the final state the agent must achieve to win the game, specifying the last room the agent should be in and the required objects. This KG is generated before training and stays unchanged. 3.3. KGA2C agent KGA2C [7] is used for our game-playing agent for both the baseline and Story Shaping [1]. It is an RL agent that combines both Advantage Actor Critic methods [11] and KG guidance to enhance its learning and decision-making capabilities. The input embedding to the KGA2C agent is a concatenation of the encoding of the agent's current internal KG and four observation tensors, including the description of the current room the agent is located in, the agent's current inventory, feedback from the environment on the agent's last action, and the agent's last action. 3.4. Story Shaping Story Shaping, proposed by Peng et al., is a technique that helps the RL agent infer tacit knowledge on how to accomplish a task. For each training step in the game, Story Shaping gives the RL agent an extra reward signal (in addition to the game environment's reward signal) based on the similarity between the agent's current internal KG and the target KG, and therefore encourages the agent to perform actions that will make its internal KG similar to the target KG. The target KG in this paper is generated by prompting the ChatGPT agent and represents a summary of the hints the ChatGPT agent learns through talking to the ChatGPT NPC. # 4. Information Retrieval from Dialogue In order to retrieve correct and important information about the game from the NPC, the NPC is expected to know the game setting and to provide the correct response every time it is asked by the agent. In other words, the NPC should act as a "database" of the game. 4.1. | 2307.15833#4 | 2307.15833#6 | 2307.15833 | [
"2301.10107"
]
|
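A minimal sketch of the extra reward signal described in Section 3.4, under the assumption that both the internal and the target KG are represented as Python sets of (subject, relation, object) triples; the exact reward magnitude and bookkeeping in the Story Shaping implementation may differ.

```python
def story_shaping_reward(internal_kg, target_kg, already_rewarded, bonus=1.0):
    """Reward each target-KG triple the first time it appears in the internal KG."""
    reward = 0.0
    for triple in target_kg & internal_kg:
        if triple not in already_rewarded:
            already_rewarded.add(triple)
            reward += bonus
    return reward

# Illustrative usage with a dialogue-derived target KG for game 1 (triples are assumed):
# target_kg = {("you", "have", "sword"), ("you", "in", "dungeon")}
# achieved = set()
# total_reward = env_reward + story_shaping_reward(agent_internal_kg, target_kg, achieved)
```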
2307.15833#6 | Dialogue Shaping: Empowering Agents through NPC Interaction | NPC Prompting We open one session of ChatGPT, as shown in Figure 2, and prompt it to be the non-player character. The NPC is provided with general information about the game, including the layout and the available objects, as well as the hints to win the game. One example of a hint is that getting a sword in the Artillery room is a prerequisite to killing the dragon. 4.2. Agent Prompting ChatGPT is prompted to be the player agent in the game. The ChatGPT agent is provided with its goal in the game (e.g., kill the dragon) and general instructions on how to converse with the NPC (e.g., ask questions based on previously given answers). We did not reveal any game details in the prompts for the ChatGPT agent, because it is expected to gain that information by asking questions to the ChatGPT NPC. | 2307.15833#5 | 2307.15833#7 | 2307.15833 | [
"2301.10107"
]
|
2307.15833#7 | Dialogue Shaping: Empowering Agents through NPC Interaction | Figure 2: Screenshot of the starting conversation between the user and the ChatGPT NPC. The question asked is generated by the ChatGPT agent and copied by the user. 4.3. Story Shaping from Dialogue After the dialogue with the NPC, we train a KGA2C agent to play the game. In order to incorporate the information learned by the ChatGPT agent during the conversation with the NPC into the KGA2C agent's training, we prompt the ChatGPT agent to generate a knowledge graph and use it as a target knowledge graph for the Story Shaping KGA2C agent. The pipeline for our method is shown in Figure 3. | 2307.15833#6 | 2307.15833#8 | 2307.15833 | [
"2301.10107"
]
|
2307.15833#8 | Dialogue Shaping: Empowering Agents through NPC Interaction | # 5. Experiments We conducted experiments to test our assumption that the dialogue is able to accelerate the KGA2C agent's convergence by generating the target knowledge graph. # 5.1. ChatGPT Prompting for Agent vs NPC Dialogue Table 1 An example prompt used to trigger ChatGPT to act as the NPC in game 1. The goal and prerequisites are different for each game. You are an NPC in a text-adventure game. You and the agent are both in the game. For each step, waits for the agent to ask questions, then you should provide a correct answer based on the information about the game given as follow: Layout: [room1]-east-[room2], [room2]-east-[room3], etc. (A-east-B means A is to the east of B) Goal and prerequisite: A dragon is in the dungeon. The only way to kill the dragon is to use a sword and there is no other way. Object information: [object1], [object2] is in [room1]. [object3], [object4], [object5] is in [room2]. [room3] has no objects... | 2307.15833#7 | 2307.15833#9 | 2307.15833 | [
"2301.10107"
]
|
2307.15833#9 | Dialogue Shaping: Empowering Agents through NPC Interaction | We open one session of ChatGPT and prompt it to be the NPC of the game. Table 1 shows the NPC prompt for one game. Table 2 An example prompt used to trigger ChatGPT to act as the agent in game 1. The goal is different for each game. You are an agent in a text-adventure game. You and the NPC are both in the game. Your goal is to kill the dragon. For each step, you should ask questions to the NPC in order to get the information on how to kill the dragon. Ask a new set of questions based on the current observation and answers given to the previous set of questions according to the following rule: 1. ask similar and follow-up questions to previous questions that have a "yes" answer. 2. Avoid asking similar and follow-up questions to previous questions that have a "no" answer. Table 3 Examples of an efficient dialogue and an inefficient dialogue between the ChatGPT agent and the ChatGPT NPC. Desired Dialogue Agent: | 2307.15833#8 | 2307.15833#10 | 2307.15833 | [
"2301.10107"
]
|
2307.15833#10 | Dialogue Shaping: Empowering Agents through NPC Interaction | Do I need an object to kill the dragon? Can the dragon be lured out? Can the dragon be defeated without using a weapon? NPC: Yes. No. No. Agent: What object I should get to kill the dragon? NPC: A sword. Agent: Where can I find the sword? NPC: The Artillery room. Dialogue ends. Enough information is gathered. Undesired Dialogue Agent: Do I need an object to kill the dragon? Can the dragon be lured out? Can the dragon be defeated without using a weapon? NPC: Yes. No. No. | 2307.15833#9 | 2307.15833#11 | 2307.15833 | [
"2301.10107"
]
|
2307.15833#11 | Dialogue Shaping: Empowering Agents through NPC Interaction | Agent: Can the dragon be trapped or immobilized temporarily? Is there any way to communicate with the dragon or understand its intentions? NPC: No. No. Agent: Is there any information about the dragon's origin or history? Can the dragon be reasoned with or persuaded to leave peacefully? NPC: No. No. We also open another separate session of ChatGPT and prompt it to be the agent of the game with a goal in mind. Table 2 shows the agent prompt for one game. Then, the dialogue begins as the agent comes up with a set of questions and the NPC provides answers to them back and forth. The ChatGPT NPC proves to be a reliable game database, correctly responding to queries about room and object locations. Moreover, when the ChatGPT agent makes ungrounded assumptions about the game (like "Does the barkeeper possess any knowledge about the dragon's weaknesses" while there is no barkeeper) in its questions, the ChatGPT NPC is able to recognize (finding | 2307.15833#10 | 2307.15833#12 | 2307.15833 | [
"2301.10107"
]
|
2307.15833#12 | Dialogue Shaping: Empowering Agents through NPC Interaction | [Figure 3 (pipeline diagram): the agent-NPC dialogue about how to kill the dragon, followed by the knowledge-graph-generation prompt and the resulting target knowledge graph.] | 2307.15833#11 | 2307.15833#13 | 2307.15833 | [
"2301.10107"
]
|
2307.15833#13 | Dialogue Shaping: Empowering Agents through NPC Interaction | Figure 3: Dialogue Shaping pipeline containing the ChatGPT NPC dialogue with the ChatGPT agent and target knowledge graph extraction. This target knowledge graph is used by the KGA2C agent with Story Shaping assistance to get additional reward signals. Table 4 The prompt used to trigger the ChatGPT agent to generate the target knowledge graph, which will later be used in the training of the KGA2C agent with Story Shaping. Output a textual knowledge graph that contains the game information required to reach the goal. Output it in the format of edges (entity1 → direction or verb → entity2). For example, you → have → rugs, town center → west → the bar | 2307.15833#12 | 2307.15833#14 | 2307.15833 | [
"2301.10107"
]
|
2307.15833#14 | Dialogue Shaping: Empowering Agents through NPC Interaction | knowledge graph to "shape" the Story Shaping KGA2C agent behaviors. We generate each game using the LIGHT framework [4]. We design each game such that the RL agent will only get one reward signal of 15 when it wins the game. For every game, the KGA2C agent is trained for 100,000 steps. After every 450 steps, the agent is evaluated for 50 episodes with 10 random seeds. We gather metrics like average and standard deviation of the test scores achieved for those 50 episodes, like in Figure 4. The maximum step limit for a single episode is 75 steps, while the optimal path for all games usually takes around 10 steps. | 2307.15833#13 | 2307.15833#15 | 2307.15833 | [
"2301.10107"
]
|
2307.15833#15 | Dialogue Shaping: Empowering Agents through NPC Interaction | out that the game does not have a barkeeper) and negate them. In evaluating the performance of the ChatGPT agent, we aim to minimize the number of exchanges with the ChatGPT NPC while retrieving hints on winning the game. We found that the ChatGPT agent is much more likely to hallucinate by coming up with ungrounded questions when our prompt does not include explicit instructions on how to ask the optimal questions. As shown in the desired dialogue in Table 3, when we include those explicit instructions in the prompt, it is able to ground its inquiries. Otherwise, it will fail to follow up on the previous questions that have a "yes" answer and will endlessly ask ungrounded questions, as shown in the undesired dialogue in Table 3. # 5.2. KGA2C Agent Training with Dialogue Shaping After the dialogue ends and the ChatGPT agent has retrieved information on how to reach the goal, we prompt it to convert that information into a textual knowledge graph representation as shown in Table 4. We then filter the edges in the knowledge graph by only including ones that have "you" as the subject, because we are only interested in what actions the agent has to perform to reach the goal. Finally, we use this filtered knowledge graph as the target | 2307.15833#14 | 2307.15833#16 | 2307.15833 | [
"2301.10107"
]
|
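The edge-filtering step in Section 5.2 can be sketched as follows, assuming the ChatGPT agent emits edges in the arrow format requested by the Table 4 prompt (e.g. "you → have → sword"); the parser below is an illustrative assumption rather than the authors' code.

```python
def parse_target_kg(text):
    """Parse 'entity1 → relation → entity2' lines into triples, keeping only edges
    whose subject is 'you', since only the agent's own required actions are used
    to build the target knowledge graph."""
    triples = set()
    for line in text.replace("->", "→").splitlines():
        parts = [p.strip().lower() for p in line.split("→")]
        if len(parts) == 3 and parts[0] == "you":
            triples.add(tuple(parts))
    return triples

# parse_target_kg("you → have → sword\ntown center → west → the bar")
# => {("you", "have", "sword")}
```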
2307.15833#16 | Dialogue Shaping: Empowering Agents through NPC Interaction | We trained the baseline KGA2C agent and the one with Story Shaping assistance for each game. The baseline KGA2C agent only receives reward signals that are built into the game mechanism (i.e., reaching the final goal), whereas the Story Shaping KGA2C agent receives additional reward signals when its internal knowledge graph overlaps with the target knowledge graph generated by the dialogue module (i.e., for completing the prerequisites of the goal). 5.3. Results Figure 4 shows the average test score and its standard deviation of the baseline KGA2C agent and the Story Shaping KGA2C agent equipped with the target knowledge graph generated from the dialogue during training for game 1. The Story Shaping KGA2C agent outperformed the baseline in all games. In all games, the Story Shaping agent converged to the optimal policy (gaining the maximum score of 15) much faster than the baseline. In game 1, the Story Shaping KGA2C agent converged to the optimal policy after training for around 10000 steps, despite a temporary drop in average scores around step 30000, while the baseline agent took around 90000 training steps to learn the optimal policy, according to Figure 4. Moreover, almost | 2307.15833#15 | 2307.15833#17 | 2307.15833 | [
"2301.10107"
]
|
2307.15833#17 | Dialogue Shaping: Empowering Agents through NPC Interaction | at all the training steps, the standard deviation range of the Story Shaping agent's score is disjoint from that of the baseline, meaning that the Story Shaping agent can consistently achieve a higher score than the baseline. Figure 4: Average and standard deviation of the test scores throughout 100000 training steps for Game 1. The standard deviation is represented as the shaded area around the lines. # 6. Conclusions Through the evaluation of our technique across a range of text games, we have shown that the dialogue module is able to extract key game information which might take a traditional action-based RL agent tens of thousands of steps to learn. Moreover, we show that the dialogue module is able to pass along that key information and guide the action agent through knowledge graphs and the Story Shaping technique effectively and reliably. Thus, we have proven the substantial potential of the dialogue component to greatly speed up the RL agent's convergence to the optimal policy. Future work might further exploit this potential by exploring approaches like few-shot prompting or finetuning LLMs to more effectively retrieve useful information from the NPC. | 2307.15833#16 | 2307.15833#18 | 2307.15833 | [
"2301.10107"
]
|
2307.15833#18 | Dialogue Shaping: Empowering Agents through NPC Interaction | # References [1] X. Peng, C. Cui, W. Zhou, R. Jia, M. Riedl, Story shaping: Teaching agents human-like behavior with stories, arXiv preprint arXiv:2301.10107 (2023). [2] OpenAI, ChatGPT: A large-scale open-domain chatbot, https://openai.com/blog/chatgpt/, 2022. [3] S. Yao, R. Rao, M. Hausknecht, K. Narasimhan, Keep calm and explore: Language models for action generation in text-based games, 2020. arXiv:2010.02903. [4] J. Urbanek, A. Fan, S. Karamcheti, S. Jain, S. Humeau, E. Dinan, T. Rocktäschel, D. Kiela, A. Szlam, J. Weston, Learning to speak and act in a fantasy text adventure game, arXiv preprint arXiv:1903.03094 (2019). | 2307.15833#17 | 2307.15833#19 | 2307.15833 | [
"2301.10107"
]
|
2307.15833#19 | Dialogue Shaping: Empowering Agents through NPC Interaction | [5] P. Ammanabrolu, M. O. Riedl, Playing text-adventure games with graph-based deep reinforcement learning, arXiv preprint arXiv:1812.01628 (2018). [6] P. Ammanabrolu, W. Cheung, D. Tu, W. Broniec, M. Riedl, Bringing stories alive: Generating interactive fiction worlds, in: Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, volume 16, 2020, pp. 3–9. [7] P. Ammanabrolu, M. Hausknecht, Graph constrained reinforcement learning for natural language action spaces, arXiv preprint arXiv:2001.08837 (2020). [8] P. Ammanabrolu, E. Tien, M. Hausknecht, M. O. Riedl, How to avoid being eaten by a grue: Structured exploration strategies for textual worlds, arXiv preprint arXiv:2006.07409 (2020). [9] Y. Xu, M. Fang, L. Chen, Y. Du, J. T. Zhou, C. Zhang, Deep reinforcement learning with stacked hierarchical attention for text-based games, Advances in Neural Information Processing Systems 33 (2020). [10] X. Peng, M. Riedl, P. Ammanabrolu, Inherently explainable reinforcement learning in natural language, Advances in Neural Information Processing Systems 35 (2022) 16178–16190. [11] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, K. Kavukcuoglu, Asynchronous methods for deep reinforcement learning, in: International conference on machine learning, PMLR, 2016, pp. 1928– | 2307.15833#18 | 2307.15833#20 | 2307.15833 | [
"2301.10107"
]
|
2307.15833#20 | Dialogue Shaping: Empowering Agents through NPC Interaction | 1937. | 2307.15833#19 | 2307.15833 | [
"2301.10107"
]
|
|
2307.15818#0 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | arXiv:2307.15818v1 [cs.RO] 28 Jul 2023 Google DeepMind https://robotics-transformer2.github.io 2023-8-1 # RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, and Brianna Zitkovich Google DeepMind. | 2307.15818#1 | 2307.15818 | [
"2304.02643"
]
|
|
2307.15818#1 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | Authors listed in alphabetical order, with contributions listed in Appendix A. We study how vision-language models trained on Internet-scale data can be incorporated directly into end-to-end robotic control to boost generalization and enable emergent semantic reasoning. Our goal is to enable a single end-to-end trained model to both learn to map robot observations to actions and enjoy the benefits of large-scale pretraining on language and vision-language data from the web. To this end, we propose to co-fine-tune state-of-the-art vision-language models on both robotic trajectory data and Internet-scale vision-language tasks, such as visual question answering. In contrast to other approaches, we propose a simple, general recipe to achieve this goal: in order to fit both natural language responses and robotic actions into the same format, we express the actions as text tokens and incorporate them directly into the training set of the model in the same way as natural language tokens. We refer to such category of models as vision-language-action models (VLA) and instantiate an example of such a model, which we call RT-2. Our extensive evaluation (6k evaluation trials) shows that our approach leads to performant robotic policies and enables RT-2 to obtain a range of emergent capabilities from Internet-scale training. This includes significantly improved generalization to novel objects, the ability to interpret commands not present in the robot training data (such as placing an object onto a particular number or icon), and the ability to perform rudimentary reasoning in response to user commands (such as picking up the smallest or largest object, or the one closest to another object). We further show that incorporating chain of thought reasoning allows RT-2 to perform multi-stage semantic reasoning, for example figuring out which object to pick up for use as an improvised hammer (a rock), or which type of drink is best suited for someone who is tired (an energy drink). | 2307.15818#0 | 2307.15818#2 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#2 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | # 1. Introduction High-capacity models pretrained on broad web-scale datasets provide an effective and powerful platform for a wide range of downstream tasks: large language models can enable not only fluent text generation (Anil et al., 2023; Brohan et al., 2022; OpenAI, 2023) but emergent problem-solving (Cobbe et al., 2021; Lewkowycz et al., 2022; Polu et al., 2022) and creative generation of prose (Brown et al., 2020; OpenAI, 2023) and code (Chen et al., 2021), while vision-language models enable open-vocabulary visual recognition (Kirillov et al., 2023; Minderer et al., 2022; Radford et al., 2021) and can even make complex inferences about object-agent interactions in images (Alayrac et al., 2022; Chen et al., 2023a,b; Driess et al., 2023; Hao et al., 2022; Huang et al., 2023; Wang et al., 2022). Such semantic reasoning, problem solving, and visual interpretation capabilities would be tremendously useful for generalist robots that must perform a variety of tasks in real-world environments. | 2307.15818#1 | 2307.15818#3 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#3 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | Figure 1 | RT-2 overview: we represent robot actions as another language, which can be cast into text tokens and trained together with Internet-scale vision-language datasets. During inference, the text tokens are de-tokenized into robot actions, enabling closed loop control. This allows us to leverage the backbone and pretraining of vision-language models in learning robotic policies, transferring some of their generalization, semantic understanding, and reasoning to robotic control. We demonstrate examples of RT-2 execution on the project website: robotics-transformer2.github.io. It is unclear how robots should acquire such capabilities. While a brute force approach might entail collecting millions of robotic interaction trials, the most capable language and vision-language models are trained on billions of tokens and images from the web (Alayrac et al., 2022; Chen et al., 2023a,b; Huang et al., 2023), an amount unlikely to be matched with robot data in the near future. On the other hand, directly applying such models to robotic tasks is also difficult: such models reason about semantics, labels, and textual prompts, whereas robots require grounded low-level actions, such as Cartesian end-effector commands. While a number of recent works have sought to incorporate language models (LLMs) and vision-language models (VLMs) into robotics (Ahn et al., 2022; Driess et al., 2023; Vemprala et al., 2023), such methods generally address only the "higher level" aspects of robotic planning, essentially taking the role of a state machine that interprets commands and parses them into individual primitives (such as picking and placing objects), which are then executed by separate low-level controllers that themselves do not benefit from the rich semantic knowledge of Internet-scale models during training. Therefore, in this paper we ask: can large pretrained vision-language models be integrated directly into low-level robotic control to boost generalization and enable emergent semantic reasoning? | 2307.15818#2 | 2307.15818#4 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#4 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | To this end, we explore an approach that is both simple and surprisingly effective: we directly train vision-language models designed for open-vocabulary visual question answering and visual dialogue to output low-level robot actions, along with solving other Internet-scale vision-language tasks. Although such models are typically trained to produce natural language tokens, we can train them on robotic trajectories by tokenizing the actions into text tokens and creating "multimodal sentences" (Driess et al., 2023) that "respond" to robotic instructions paired with camera observations by producing corresponding actions. In this way, vision-language models can be directly trained to act as instruction-following robotic policies. This simple approach is in contrast with prior alternatives for incorporating VLMs into robot policies (Shridhar et al., 2022a) or designing new vision-language-action architectures from scratch (Reed et al., 2022): instead, pre-existing vision-language models, with already-amortized significant compute investment, are trained without any new parameters to output text-encoded actions. We refer to this category of models as vision-language-action (VLA) models. We instantiate VLA models by building on the protocol proposed for RT-1 (Brohan et al., 2022), using a similar dataset, but expanding the model to use a large vision-language backbone. Hence we refer to our model as RT-2 (Robotics Transformer 2). We provide an overview in Figure 1. | 2307.15818#3 | 2307.15818#5 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#5 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | Vision-Language-Action Models Transfer Web Knowledge to Robotic Control We observe that robotic policies derived from such vision-language models exhibit a range of remarkable capabilities, combining the physical motions learned from the robot data with the ability to interpret images and text learned from web data into a single model. Besides the expected benefit of dramatically improving generalization to novel objects and semantically varied instructions, we observe a number of emergent capabilities. While the modelâ s physical skills are still limited to the distribution of skills seen in the robot data, the model acquires the ability to deploy those skills in new ways by interpreting images and language commands using knowledge gleaned from the web. Some example highlights are shown in Figure 2. The model is able to re-purpose pick and place skills learned from robot data to place objects near semantically indicated locations, such as specific numbers or icons, despite those cues not being present in the robot data. The model can also interpret relations between objects to determine which object to pick and where to place it, despite no such relations being provided in the robot demonstrations. Furthermore, if we augment the command with chain of thought prompting, the model is able to make even more complex semantic inferences, such as figuring out which object to pick up for use as an improvised hammer (a rock), or which type of drink is best suited for someone who is tired (an energy drink). Our main contribution is RT-2, a family of models derived from fine-tuning large vision-language models trained on web-scale data to directly act as generalizable and semantically aware robotic policies. Our experiments investigate models with up to 55B parameters trained on Internet data and instruction-annotated robotic trajectories from previous work (Brohan et al., 2022). Over the course of 6k robotic evaluations, we show that RT-2 enable significant improvements to generalization over objects, scenes, and instructions, and exhibit a breadth of emergent capabilities inherited from web-scale vision-language pretraining. | 2307.15818#4 | 2307.15818#6 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#6 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | # 2. Related Work Vision-language models. There are several categories of Vision-Language Models (VLMs) (Gan et al., 2022), with perhaps two most relevant: (1) representation-learning models, e.g. CLIP (Radford et al., 2021), which learn common embeddings for both modalities, and (2) visual language models of the form {vision, text} â {text} which learn to take vision and language as input and provide free-form text. Both categories have been used to provide pretraining for a wide variety of applied to downstream applications such as object classification (Radford et al., 2021), detection (Gu et al., 2021), and segmentation (Ghiasi et al., 2021). In this work, we focus on the latter category (Alayrac et al., 2022; Chen et al., 2023a,b; Driess et al., 2023; Hao et al., 2022; Li et al., 2023, 2019; Lu et al., 2019). These models are generally trained on many different tasks, such as image captioning, vision-question answering (VQA), and general language tasks on multiple datasets at the same time. While prior works study VLMs for a wide range of problems and settings including in robotics, our focus is on how the capabilities of VLMs can be extended to robotics closed-loop control by endowing them with the ability to predict robot actions, thus leveraging the knowledge already present in VLMs to enable new levels of generalization. Generalization in robot learning. Developing robotic controllers that can broadly succeed in a variety of scenarios is a long-standing goal in robotics research (Kaelbling, 2020; Smith and Coles, 1973). A promising approach for enabling generalization in robotic manipulation is by learning from large and diverse datasets (Dasari et al., 2019; Levine et al., 2018; Pinto and Gupta, 2016). | 2307.15818#5 | 2307.15818#7 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#7 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | By doing so, prior methods have demonstrated how robots can generalize to novel object instances (Finn and Levine, 2017; Levine et al., 2018; Mahler et al., 2017; Pinto and Gupta, 2016; Young et al., 2021), to tasks involving novel combinations of objects and skills (Dasari and Gupta, 2021; Finn et al., 2017; James et al., 2018; Jang et al., 2021; Yu et al., 2018), to new goals or language instructions (Jang et al., 2021; Jiang et al., 2022; Liu et al., 2022; Mees et al., 2022; Nair et al., 2022a; Pong et al., 3 RT-2: | 2307.15818#6 | 2307.15818#8 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#8 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | Vision-Language-Action Models Transfer Web Knowledge to Robotic Control 2019), to tasks with novel semantic object categories (Shridhar et al., 2021; Stone et al., 2023), and to unseen environments (Cui et al., 2022; Du et al., 2023a; Hansen et al., 2020). Unlike most of these prior works, we aim to develop and study a single model that can generalize to unseen conditions along all of these axes. A key ingredient of our approach is to leverage pre-trained models that have been exposed to data that is much broader than the data seen by the robot. Pre-training for robotic manipulation. Pre-training has a long history in robotic learning. Most works focus on pre-trained visual representations that can be used to initialize the encoder of the robotâ s camera observations, either via supervised ImageNet classification (Shah and Kumar, 2021), data augmentation (Kostrikov et al., 2020; Laskin et al., 2020a,b; Pari et al., 2021) or objectives that are tailored towards robotic control (Karamcheti et al., 2023; Ma et al., 2022; Majumdar et al., 2023b; Nair et al., 2022b; Xiao et al., 2022b). Other works have incorporated pre-trained language models, often either as an instruction encoder (Brohan et al., 2022; Hill et al., 2020; Jang et al., 2021; Jiang et al., 2022; Lynch and Sermanet, 2020; Nair et al., 2022a; Shridhar et al., 2022b) or for high-level planning (Ahn et al., 2022; Driess et al., 2023; Huang et al., 2022; Mu et al., 2023; Singh et al., 2023; Wu et al., 2023). Rather than using pre-training vision models or pre-trained language models, we specifically consider the use of pre-trained vision-language models (VLMs), which provide rich, grounded knowledge about the world. | 2307.15818#7 | 2307.15818#9 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#9 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | Prior works have studied the use of VLMs for robotics (Driess et al., 2023; Du et al., 2023b; Gadre et al., 2022; Karamcheti et al., 2023; Shah et al., 2023; Shridhar et al., 2021; Stone et al., 2023), and form part of the inspiration for this work. These prior approaches use VLMs for visual state representations (Karamcheti et al., 2023), for identifying objects (Gadre et al., 2022; Stone et al., 2023), for high-level planning (Driess et al., 2023), or for providing supervision or success detection (Du et al., 2023b; Ma et al., 2023; Sumers et al., 2023; Xiao et al., 2022a; Zhang et al., 2023). While CLIPort (Shridhar et al., 2021) and MOO (Stone et al., 2023) integrate pre-trained VLMs into end-to-end visuomotor manipulation policies, both incorporate significant structure into the policy that limits their applicability. Notably, our work does not rely on a restricted 2D action space and does not require a calibrated camera. Moreover, a critical distinction is that, unlike these works, we leverage VLMs that generate language, and the unified output space of our formulation enables model weights to be entirely shared across language and action tasks, without introducing action-only model layer components. | 2307.15818#8 | 2307.15818#10 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#10 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | # 3. Vision-Language-Action Models In this section, we present our model family and the design choices for enabling training VLMs to directly perform closed-loop robot control. First, we describe the general architecture of our models and how they can be derived from models that are commonly used for vision-language tasks. Then, we introduce the recipe and challenges of fine-tuning large VLMs that are pre-trained on web-scale data to directly output robot actions, becoming VLA models. Finally, we describe how to make these models practical for robot tasks, addressing challenges with model size and inference speed to enable real-time control. | 2307.15818#9 | 2307.15818#11 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#11 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | # 3.1. Pre-Trained Vision-Language Models The vision-language models (Chen et al., 2023a; Driess et al., 2023) that we build on in this work take as input one or more images and produce a sequence of tokens, which conventionally represents natural language text. Such models can perform a wide range of visual interpretation and reasoning tasks, from inferring the composition of an image to answering questions about individual objects and their relations to other objects (Alayrac et al., 2022; Chen et al., 2023a; Driess et al., 2023; Huang et al., 2023). Representing the knowledge necessary to perform such a wide range of tasks | 2307.15818#10 | 2307.15818#12 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#12 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | 4 RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control requires large models and web-scale datasets. In this work, we adapt two previously proposed VLMs to act as VLA models: PaLI-X (Chen et al., 2023a) and PaLM-E (Driess et al., 2023). We will refer to vision-language-action versions of these models as RT-2-PaLI-X and RT-2-PaLM-E. We leverage instantiations of these models that range in size from billions to tens of billions of parameters. We provide a detailed description of the architecture of these two models in Appendix D. | 2307.15818#11 | 2307.15818#13 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#13 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | [Figure 2: a grid of rollout examples whose instructions include "put strawberry into the correct bowl" and "pick up the bag about to fall off the table", alongside other instructions involving symbols, reasoning, and human recognition.] Figure 2 | RT-2 is able to generalize to a variety of real-world situations that require reasoning, symbol understanding, and human recognition. | 2307.15818#12 | 2307.15818#14 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#14 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | We study these challenging scenarios in detail in Section 4. # 3.2. Robot-Action Fine-tuning To enable vision-language models to control a robot, they must be trained to output actions. We take a direct approach to this problem, representing actions as tokens in the model's output, which are treated in the same way as language tokens. We base our action encoding on the discretization proposed by Brohan et al. (2022) for the RT-1 model. The action space consists of the 6-DoF positional and rotational displacement of the robot end-effector, as well as the level of extension of the robot gripper and a special discrete command for terminating the episode, which should be triggered by the policy to signal successful completion. The continuous dimensions (all dimensions except for the discrete termination command) are discretized into 256 bins uniformly. Thus, the robot action can be represented using the ordinals of the discrete bins as 8 integer numbers. In order to use these discretized actions to finetune a vision-language model into a vision-language-action model, we need to associate tokens from the model's existing tokenization with the discrete action bins. This requires | 2307.15818#13 | 2307.15818#15 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#15 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | reserving 256 tokens to serve as action tokens. Which tokens to choose depends on the particular tokenization used by each VLM, which we discuss later in this section. In order to define a target for VLM fine-tuning we convert the action vector into a single string by simply concatenating action tokens for each dimension with a space character: "terminate Δpos_x Δpos_y Δpos_z Δrot_x Δrot_y Δrot_z gripper_extension". A possible instantiation of such a target could be: "1 128 91 241 5 101 127". | 2307.15818#14 | 2307.15818#16 | 2307.15818 | [
"2304.02643"
]
|
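A sketch of the action discretization and string encoding described in the two preceding chunks. The value ranges for position, rotation, and gripper are illustrative assumptions (the paper does not specify them here); the token ordering follows the quoted target format.

```python
import numpy as np

NUM_BINS = 256

def discretize(value: float, low: float, high: float, bins: int = NUM_BINS) -> int:
    """Map a continuous value in [low, high] to an integer bin in [0, bins - 1]."""
    value = float(np.clip(value, low, high))
    return int(round((value - low) / (high - low) * (bins - 1)))

def encode_action(terminate: int, delta_pos, delta_rot, gripper: float,
                  pos_range=(-0.1, 0.1), rot_range=(-0.5, 0.5), grip_range=(0.0, 1.0)) -> str:
    """Format one robot action as the space-separated string used as the fine-tuning target."""
    tokens = [str(terminate)]
    tokens += [str(discretize(v, *pos_range)) for v in delta_pos]   # Δpos_x, Δpos_y, Δpos_z
    tokens += [str(discretize(v, *rot_range)) for v in delta_rot]   # Δrot_x, Δrot_y, Δrot_z
    tokens.append(str(discretize(gripper, *grip_range)))            # gripper_extension
    return " ".join(tokens)

def decode_action(text: str) -> dict:
    """De-tokenize a predicted string back into integer bins (terminate + 7 continuous dims)."""
    ints = [int(t) for t in text.split()]
    return {"terminate": ints[0], "pos_bins": ints[1:4], "rot_bins": ints[4:7], "gripper_bin": ints[7]}

target = encode_action(0, [0.01, -0.02, 0.0], [0.1, 0.0, -0.1], 0.5)
print(target, decode_action(target))
```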
2307.15818#16 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | The two VLMs that we finetune in our experiments, PaLI-X (Chen et al., 2023a) and PaLM-E (Driess et al., 2023), use different tokenizations. For PaLI-X, integers up to 1000 each have a unique token, so we simply associate the action bins to the token representing the corresponding integer. For the PaLM-E model, which does not provide this convenient representation of numbers, we simply overwrite the 256 least frequently used tokens to represent the action vocabulary. It is worth noting that training VLMs to override existing tokens with action tokens is a form of symbol tuning (Wei et al., 2023), which has been shown to work well for VLMs in prior work. Taking the action representation described above, we convert our robot data to be suitable for VLM model fine-tuning, where our inputs include robot camera image and textual task description (using standard VQA format â Q: what action should the robot take to [task instruction]? A:â ), and our output is formatted as a string of numbers/least frequently used tokens representing a robot action. | 2307.15818#15 | 2307.15818#17 | 2307.15818 | [
"2304.02643"
]
|
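A sketch of how the discretized action bins might be mapped onto each model's vocabulary and wrapped in the quoted VQA-style prompt. The helper names and the reserved-token list are assumptions, not the released tokenizers.

```python
from typing import List

def bins_to_tokens_pali_x(action_bins: List[int]) -> List[str]:
    """PaLI-X: integers up to 1000 already have dedicated tokens, so each bin maps to
    the token for that integer written out as text."""
    return [str(b) for b in action_bins]

def bins_to_tokens_palm_e(action_bins: List[int], least_frequent_token_ids: List[int]) -> List[int]:
    """PaLM-E: the 256 least frequently used token ids are overwritten to act as the
    action vocabulary, so bin i maps to the i-th reserved token id."""
    assert len(least_frequent_token_ids) >= 256
    return [least_frequent_token_ids[b] for b in action_bins]

def build_training_example(instruction: str, action_tokens: List[str]) -> dict:
    """Standard VQA-format input with the action string as the target text."""
    return {
        "prompt": f"Q: what action should the robot take to [{instruction}]? A:",
        "target": " ".join(action_tokens),
    }

example = build_training_example("pick 7up can", bins_to_tokens_pali_x([1, 128, 91, 241, 5, 101, 127]))
print(example["prompt"], example["target"])
```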
2307.15818#17 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | Co-Fine-Tuning. As we will show in our experiments, a key technical detail of the training recipe that improves robot performance is co-fine-tuning robotics data with the original web data instead of naïve finetuning on robot data only. We notice that co-fine-tuning leads to more generalizable policies since the policies are exposed to both abstract visual concepts from web scale data and low level robot actions during fine-tuning, instead of just robot actions. During co-fine-tuning we balance the ratios of robot and web data in each training batch by increasing the sampling weight on the robot dataset. Output Constraint. One important distinction between RT-2 and standard VLMs is that RT-2 is required to output valid action tokens for execution on the real robot. Thus, to ensure that RT-2 outputs valid action tokens during decoding, we constrain its output vocabulary via only sampling valid action tokens when the model is prompted with a robot-action task, whereas the model is still allowed to output the full range of natural language tokens on standard vision-language tasks. | 2307.15818#16 | 2307.15818#18 | 2307.15818 | [
"2304.02643"
]
|
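A sketch of the output constraint described above, assuming the valid action token ids are known: logits outside the action vocabulary are masked whenever the prompt is a robot-action task, and left untouched on ordinary vision-language tasks.

```python
import numpy as np

def constrain_to_action_vocab(logits: np.ndarray, action_token_ids: list, is_robot_task: bool) -> np.ndarray:
    """Mask every token that is not a valid action token for robot-action prompts,
    so decoding can only produce executable action strings."""
    if not is_robot_task:
        return logits
    masked = np.full_like(logits, -np.inf)
    masked[action_token_ids] = logits[action_token_ids]
    return masked

# Illustrative usage with a random logit vector and 256 reserved ids (both assumptions).
vocab_size = 32000
logits = np.random.default_rng(0).normal(size=vocab_size)
action_ids = list(range(vocab_size - 256, vocab_size))
next_id = int(np.argmax(constrain_to_action_vocab(logits, action_ids, is_robot_task=True)))
assert next_id in action_ids
```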
2307.15818#18 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | # 3.3. Real-Time Inference The size of modern VLMs can reach tens or hundreds of billions of parameters (Chen et al., 2023a; Driess et al., 2023). The largest model trained in this work uses 55B parameters. It is infeasible to directly run such models on the standard desktop-style machines or on-robot GPUs commonly used for real-time robot control. To the best of our knowledge, our model is the largest ever, by over an order of magnitude, used for direct closed-loop robotic control, and therefore requires a new set of solutions to enable efficient real-time inference. We develop a protocol that allows us to run RT-2 models on robots by deploying them in a multi-TPU cloud service and querying this service over the network. With this solution, we can achieve a suitable frequency of control and also serve multiple robots using the same cloud service. The largest model we evaluated, the 55B parameter RT-2-PaLI-X-55B model, can run at a frequency of 1-3 Hz. The smaller version of that model, consisting of 5B parameters, can run at a frequency of around 5 Hz. | 2307.15818#17 | 2307.15818#19 | 2307.15818 | [
"2304.02643"
]
|
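A sketch of the kind of closed-loop client implied by the multi-TPU cloud deployment described above; the endpoint, payload format, and HTTP transport are assumptions, since the paper does not specify the serving protocol.

```python
import time
import requests  # assumed transport; the actual service interface is not described in the paper

POLICY_URL = "https://example-robot-policy-service/infer"  # hypothetical endpoint

def control_loop(get_observation, execute_action, hz: float = 3.0, steps: int = 100):
    """Query a remotely hosted VLA model at a fixed control frequency and execute
    the returned action string on the robot."""
    period = 1.0 / hz
    for _ in range(steps):
        start = time.time()
        obs = get_observation()                       # e.g. camera image + instruction
        resp = requests.post(POLICY_URL, json=obs, timeout=period)
        execute_action(resp.json()["action_tokens"])  # de-tokenized into an action downstream
        time.sleep(max(0.0, period - (time.time() - start)))
```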
2307.15818#19 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | # 4. Experiments Our experiments focus on real-world generalization and emergent capabilities of RT-2 and aim to answer the following questions: 6 RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control 1. How does RT-2 perform on seen tasks and more importantly, generalize over new objects, backgrounds, and environments? 2. Can we observe and measure any emergent capabilities of RT-2? 3. How does the generalization vary with parameter count and other design decisions? 4. Can RT-2 exhibit signs of chain-of-thought reasoning similarly to vision-language models? 4. Can RT-2 exhibit signs of chain-of-thought reasoning similarly to vision-language models? We evaluate our approach and several baselines with about 6,000 evaluation trajectories in a variety of conditions, which we describe in the following sections. Unless specified otherwise, we use a 7DoF mobile manipulator with the action space described in Sec. 3.2. We also demonstrate examples of RT-2 execution on the project website: robotics-transformer2.github.io. We train two specific instantiations of RT-2 that leverage pre-trained VLMs: (1) RT-2-PaLI-X is built from 5B and 55B PaLI-X (Chen et al., 2023a), and (2) RT-2-PaLM-E is built from 12B PaLM-E (Driess et al., 2023). For training, we leverage the original web scale data from Chen et al. (2023a) and Driess et al. (2023), which consists of visual question answering, captioning, and unstructured interwoven image and text examples. We combine it with the robot demonstration data from Brohan et al. (2022), which was collected with 13 robots over 17 months in an office kitchen environment. Each robot demonstration trajectory is annotated with a natural language instruction that describes the task performed, consisting of a verb describing the skill (e.g., â | 2307.15818#18 | 2307.15818#20 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#20 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | pickâ , â openâ , â place intoâ ) and one or more nouns describing the objects manipulated (e.g., â 7up canâ , â drawerâ , â napkinâ ) (see Appendix B for more details on the used datasets). For all RT-2 training runs we adopt the hyperparameters from the original PaLI-X (Chen et al., 2023a) and PaLM-E (Driess et al., 2023) papers, including learning rate schedules and regularizations. More training details can be found in Appendix E. Baselines. We compare our method to multiple state-of-the-art baselines that challenge different aspects of our method. All of the baselines use the exact same robotic data. To compare against a state-of-the-art policy, we use RT-1 (Brohan et al., 2022), a 35M parameter transformer-based model. To compare against state-of-the-art pretrained representations, we use VC-1 (Majumdar et al., 2023a) and R3M (Nair et al., 2022b), with policies implemented by training an RT-1 backbone to take their representations as input. To compare against other architectures for using VLMs, we use MOO (Stone et al., 2023), which uses a VLM to create an additional image channel for a semantic map, which is then fed into an RT-1 backbone. | 2307.15818#19 | 2307.15818#21 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#21 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | More information is provided in Appendix C. 4.1. How does RT-2 perform on seen tasks and more importantly, generalize over new objects, backgrounds, and environments? (a) Unseen Objects (b) Unseen Backgrounds (c) Unseen Environments Figure 3 | Example generalization scenarios used for evaluation in Figures 4 and 6b and Tables 4 and 6. To evaluate in-distribution performance as well as generalization capabilities, we compare the RT-2-PaLI-X and RT-2-PaLM-E models to the four baselines listed in the previous sections. For the seen tasks category, we use the same suite of seen instructions as in RT-1 (Brohan et al., 2022), which include over 200 tasks in this evaluation: 36 for picking objects, 35 for knocking objects, 35 for placing things upright, 48 for moving objects, 18 for opening and closing various drawers, and 36 for picking out of and placing objects into drawers. | 2307.15818#20 | 2307.15818#22 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#22 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | Note, however, that these â in-distributionâ evaluations still vary the placement of objects and factors such as time of day and robot position, requiring the skills to generalize to realistic variability in the environment. 7 # RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control Figure 3 shows example generalization evaluations, which are split into unseen categories (objects, backgrounds and environments), and are additionally split into easy and hard cases. For unseen objects, hard cases include harder-to-grasp and more unique objects (such as toys). For unseen backgrounds, hard cases include more varied backgrounds and novel objects. Lastly, for unseen environments, hard cases correspond to a more visually distinct office desk environment with monitors and accessories, while the easier environment is a kitchen sink. These evaluations consists of over 280 tasks that focus primarily on pick and placing skills in many diverse scenarios. The list of instructions for unseen categories is specified in Appendix F.2. 100% B R3M 75% @VC-1 BRT-1 ue Moo 25% @RT-2 w/ PaLM-E-12B | @RT-2 w/ PaLI-X-55B 0% Seen Tasks Unseen Unseen Unseen Unseen Objects Backgrounds Environments Average Figure 4 | Overall performance of two instantiations of RT-2 and baselines across seen training tasks as well as unseen evaluations measuring generalization to novel objects, novel backgrounds, and novel environments. Appendix Table 4 details the full results. The evaluation results are shown in Figure 4 and Appendix Table 4. The performance on seen tasks is similar between the RT-2 models and RT-1, with other baselines attaining a lower success rate. The difference between the RT-2 models and the baseline is most pronounced in the various generalization experiments, suggesting that the strength of vision-language-action models lies in transferring more generalizable visual and semantic concepts from their Internet-scale pretraining data. Here, on average, both instantiations of RT-2 perform similarly, resulting in â ¼2x improvement over the next two baselines, RT-1 and MOO, and â ¼6x better than the other baselines. The PaLM-E version of RT-2 seems to perform better than the RT-2-PaLI-X in harder versions of generalization scenarios while under-performing on easier ones, resulting in a similar average performance. Open Source Language Table Benchmark. | 2307.15818#21 | 2307.15818#23 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#23 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | To provide an additional point of comparison using open-source baselines and environments, we leverage the open-source Language-Table simulation environment from Lynch et al. (2022). We co-fine-tune a smaller PaLI 3B model on several prediction tasks, including in-domain VQA tasks, for the Language-Table dataset, and evaluate the resulting policy in simulation. For the action prediction task, we discretize and encode actions as text in the format â X Yâ , where X and Y range between {-10, -9, . . . , +9, +10}, and represent delta 2D cartesian setpoints of the end effector. | 2307.15818#22 | 2307.15818#24 | 2307.15818 | [
"2304.02643"
]
|
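A sketch of the Language-Table text action encoding described above, where a 2D end-effector delta is mapped to an "X Y" string with X, Y in {-10, ..., +10}; the continuous scale (max_delta) and rounding scheme are assumed values, not taken from the paper.

```python
def encode_xy(dx: float, dy: float, max_delta: float = 0.03) -> str:
    """Map a continuous 2D delta setpoint to the text action "X Y"."""
    to_bin = lambda v: max(-10, min(10, round(v / max_delta * 10)))
    return f"{to_bin(dx)} {to_bin(dy)}"

def decode_xy(text: str, max_delta: float = 0.03):
    """Invert the text encoding back to a continuous 2D setpoint delta."""
    x, y = (int(t) for t in text.split())
    return x / 10 * max_delta, y / 10 * max_delta

print(encode_xy(0.012, -0.03))  # "4 -10"
print(decode_xy("4 -10"))       # (0.012, -0.03)
```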
2307.15818#24 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | Due to its reduced size, the resulting model can run inference at a similar rate (5 Hz) as the other baselines. The results of this experiment are presented in Table 1. We observe a significant performance boost when using our model compared to the baselines, indicating that the VLM-based pre-training together with the expressiveness of the large PaLI model can be beneficial in other scenarios, in this case, simulation with a different robot. We also show qualitative real-world out-of-distribution behaviors behaviors in Figure 5, demonstrating novel pushing tasks and targeting objects not before seen in this environment. More details about the Language Table experiments can be found in Appendix B and D. # 4.2. Can we observe and measure any emergent capabilities of RT-2? In addition to evaluating the generalization capabilities of vision-language-action models, we also aim to evaluate the degree to which such models can enable new capabilities beyond those demonstrated | 2307.15818#23 | 2307.15818#25 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#25 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | [Figure 5 panels: "Push the ketchup to the blue cube" and "Push the blue cube to the tabasco".] Table 1 | Performance on the simulated Language-Table tasks (Lynch and Sermanet, 2020): BC-Zero (Jang et al., 2021) 72 ± 3; RT-1 (Brohan et al., 2022) 74 ± 13; LAVA (Lynch et al., 2022) 77 ± 4; RT-2-PaLI-3B (ours) 90 ± 10. Figure 5 | Real-world out-of-distribution behaviors in the Language Table environment. Identical RT-2-PaLI-3B model checkpoint is used as in Tab. 1. in the robot data by transferring knowledge from the web. We refer to such capabilities as emergent, in the sense that they emerge by transferring Internet-scale pretraining. We do not expect such transfer to enable new robotic motions, but we do expect semantic and visual concepts, including relations and nouns, to transfer effectively, even in cases where those concepts were not seen in the robot data. Qualitative Evaluations. First, we experiment with our RT-2-PaLI-X model to determine various emergent capabilities transferred from vision-language concepts. We demonstrate some examples of such interactions in Figure 2. We find through our explorations that RT-2 inherits novel capabilities in terms of semantic understanding and basic reasoning in the context of the scene. For example, accomplishing the task " | 2307.15818#24 | 2307.15818#26 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#26 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | in the robot data by transferring knowledge from the web. We refer to such capabilities as emergent, in the sense that they emerge by transferring Internet-scale pretraining. We do not expect such transfer to enable new robotic motions, but we do expect semantic and visual concepts, including relations and nouns, to transfer effectively, even in cases where those concepts were not seen in the robot data. Qualitative Evaluations. First, we experiment with our RT-2-PaLI-X model to determine various emergent capabilities transferred from vision-language concepts. We demonstrate some examples of such interactions in Figure 2. We find through our explorations that RT-2 inherits novel capabilities in terms of semantic understanding and basic reasoning in the context of the scene. For example accomplishing the task â | 2307.15818#25 | 2307.15818#27 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#27 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | put strawberry into the correct bowlâ requires a nuanced understanding of not only what a strawberry and bowl are, but also reasoning in the context the scene to know the strawberry should go with the like fruits. For the task â pick up the bag about to fall off the table,â RT-2 demonstrates physical understanding to disambiguate between two bags and recognize the precariously placed object. All the interactions tested in these scenarios have never been seen in the robot data, which points to the transfer of semantic knowledge from vision-language data. Quantitative Evaluations. To quantify these emergent capabilities, we take the top two baselines from the previous evaluations, RT-1 and VC-1, and compare them against our two models: RT-2-PaLI-X and RT-2-PaLM-E. To reduce the variance of these experiment, we evaluate all of the methods using the A/B testing framework (Fisher, 1936), where all four models are evaluated one after another in the exact same conditions. | 2307.15818#26 | 2307.15818#28 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#28 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | Weâ split the emergent capabilities of RT-2 into three categories covering axes of reasoning and semantic understanding (with examples of each shown in Appendix Figure 8). The first we term symbol understanding, which explicitly tests whether the RT-2 policy transfers semantic knowledge from vision-language pretraining that was not present in any of the robot data. Example instructions in this category are â move apple to 3â or â push coke can on top of heartâ . The second category we term reasoning, which demonstrates the ability to apply various aspects of reasoning of the underlying VLM to control tasks. These tasks require visual reasoning (â | 2307.15818#27 | 2307.15818#29 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#29 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | move the apple to cup with same colorâ ), math (â move X near the sum of two plus oneâ ), and multilingual understanding (â mueve la manzana al vaso verdeâ ). We refer to the last category as human recognition tasks, which include tasks such as â move the coke can to the person with glassesâ , to demonstrate human-centric understanding and recognition. The full list of instructions used for this evaluation is specified in Appendix F.2. We present the results of this experiment in Figure 6a with all the numerical results in Appendix H.2. We observe that our VLA models significantly outperform the baselines across all categories, with our best RT-2-PaLI-X model achieving more than 3x average success rate over the next best baseline (RT-1). We also note that while the larger PaLI-X-based model results in better symbol understanding, reasoning and person recognition performance on average, the smaller PaLM-E-based model has an edge on tasks that involve math reasoning. We attribute this interesting result to the different pre-training mixture used in PaLM-E, which results in a model that is more capable at math calculation than the mostly visually pre-trained PaLI-X. | 2307.15818#28 | 2307.15818#30 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#30 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | [Figure 6 bar charts: panel (a) compares VC-1, RT-1, RT-2 w/ PaLM-E-12B, and RT-2 w/ PaLI-X-55B on symbol understanding, reasoning, human recognition, and their average; panel (b) compares co-fine-tuned, fine-tuned, and from-scratch RT-2-PaLI-X at 5B and 55B on unseen objects, backgrounds, environments, and their average.] (a) Performance comparison on various emergent skill evaluations (Figure 8) between RT-2 and two baselines. (b) Ablations of RT-2-PaLI-X showcasing the impact of parameter count and training strategy on generalization. Figure 6 | Quantitative performance of RT-2 across (6a) emergent skills and (6b) size and training ablations. Appendix Tables 5 and 6 detail the full numerical results. # 4.3. How does the generalization vary with parameter count and other design decisions? For this comparison, we use the RT-2-PaLI-X model because of its flexibility in terms of model size (due to the nature of PaLM-E, RT-2-PaLM-E is restricted to only certain sizes of PaLM and ViT models). In particular, we compare two different model sizes, 5B and 55B, as well as three different training routines: training a model from scratch, without using any weights from the VLM pre-training; fine-tuning a pre-trained model using robot action data only; and co-fine-tuning (co-training with fine-tuning), the primary method used in this work where we use both the original VLM training data as well as robotic data for VLM fine-tuning. | 2307.15818#29 | 2307.15818#31 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#31 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | Since we are mostly interested in the generalization aspects of these models, we remove the seen tasks evaluation from this set of experiments. The results of the ablations are presented in Figure 6b and Appendix Table 6. First, we observe that training a very large model from scratch results in a very poor performance even for the 5B model. Given this result, we decide to skip the evaluation of an even bigger 55B PaLI-X model when trained from scratch. Second, we notice that co-fine-tuning a model (regardless of its size) results in a better generalization performance than simply fine-tuning it with robotic data. We attribute this to the fact that keeping the original data around the fine-tuning part of training, allows the model to not forget its previous concepts learned during the VLM training. Lastly, somewhat unsurprisingly, we notice that the increased size of the model results in a better generalization performance. | 2307.15818#30 | 2307.15818#32 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#32 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | # 4.4. Can RT-2 exhibit signs of chain-of-thought reasoning similarly to vision-language models? Inspired by the chain-of-thought prompting method in LLMs (Wei et al., 2022), we fine-tune a variant of RT-2 with PaLM-E for just a few hundred gradient steps to increase its capability of utilizing language and actions jointly with the hope that it will elicit a more sophisticated reasoning behavior. We augment the data to include an additional â | 2307.15818#31 | 2307.15818#33 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#33 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | Planâ step, which describes the purpose of the action that the robot is about to take in natural language first, which is then followed by the actual action tokens, e.g. â Instruction: Iâ m hungry. Plan: pick rxbar chocolate. Action: 1 128 124 136 121 158 111 255.â This data augmentation scheme acts as a bridge between VQA datasets (visual reasoning) and manipulation datasets (generating actions). We qualitatively observe that RT-2 with chain-of-thought reasoning is able to answer more so- phisticated commands due to the fact that it is given a place to plan its actions in natural language first. This is a promising direction that provides some initial evidence that using LLMs or VLMs as planners (Ahn et al., 2022; Driess et al., 2023) can be combined with low-level policies in a single VLA model. Rollouts of RT-2 with chain-of-thought reasoning are shown in Figure 7 and in Appendix I. | 2307.15818#32 | 2307.15818#34 | 2307.15818 | [
"2304.02643"
]
|
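A sketch of the chain-of-thought data augmentation described above: the training target gains an intermediate "Plan" step before the action tokens, and at inference the generated text is split back into a plan and an executable action string. The delimiters follow the quoted example; the helper names are illustrative.

```python
def make_cot_target(instruction: str, plan: str, action_tokens: str) -> str:
    """Compose the augmented training target with a natural-language Plan step."""
    return f"Instruction: {instruction} Plan: {plan} Action: {action_tokens}"

def parse_cot_output(text: str):
    """Split a generated continuation into the plan and the executable action string."""
    plan_part, action_part = text.split("Action:", 1)
    plan = plan_part.split("Plan:", 1)[-1].strip()
    return plan, action_part.strip()

target = make_cot_target("I'm hungry.", "pick rxbar chocolate.", "1 128 124 136 121 158 111 255")
print(parse_cot_output(target))  # ('pick rxbar chocolate.', '1 128 124 136 121 158 111 255')
```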
2307.15818#34 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | 10 # RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control Prompt: Given <img> Instruction: Prompt: Given <img> Instruction: Move all the objects together. Prediction: Plan: move green can near green rice chip bag. Action: 1 128 126 127 135 123 119 127 Action: 1 143 129 123 145 114 115 127 Prompt: Given <img> Instruction: Pick the object that is different from all other objects Prediction: Plan: pick rxbar chocolate. Action: 1 128 129 125 131 125 128 127 Prompt: Given <img> Instruction: Move the green objects together. Plan: ve green can near green rice chip bag. Action: 1 130 129 121 131 127 128 127 Prompt: | 2307.15818#33 | 2307.15818#35 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#35 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | Given <img> I need to hammer a nail, what object from the scene might be useful? Prediction: Rocks. Action: 1 129 138 122 132 135 106 127 Figure 7 | Rollouts of RT-2 with chain-of-thought reasoning, where RT-2 generates both a plan and an action. # 5. Limitations Even though RT-2 exhibits promising generalization properties, there are multiple limitations of this approach. First, although we show that including web-scale pretraining via VLMs boosts generalization over semantic and visual concepts, the robot does not acquire any ability to perform new motions by virtue of including this additional experience. | 2307.15818#34 | 2307.15818#36 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#36 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | The modelâ s physical skills are still limited to the distribution of skills seen in the robot data (see Appendix G), but it learns to deploy those skills in new ways. We believe this is a result of the dataset not being varied enough along the axes of skills. An exciting direction for future work is to study how new skills could be acquired through new data collection paradigms such as videos of humans. Second, although we showed we could run large VLA models in real time, the computation cost of these models is high, and as these methods are applied to settings that demand high-frequency control, real-time inference may become a major bottleneck. An exciting direction for future research is to explore quantization and distillation techniques that might enable such models to run at higher rates or on lower-cost hardware. This is also connected to another current limitation in that there are only a small number of generally available VLM models that can be used to create RT-2. We hope that more open-sourced models will become available (e.g. https://llava-vl.github.io/) and the proprietary ones will open up their fine-tuning APIs, which is a sufficient requirement to build VLA models. | 2307.15818#35 | 2307.15818#37 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#37 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | # 6. Conclusions In this paper, we described how vision-language-action (VLA) models could be trained by combining vision-language model (VLM) pretraining with robotic data. We then presented two instantiations of VLAs based on PaLM-E and PaLI-X, which we call RT-2-PaLM-E and RT-2-PaLI-X. These models are co- fine-tuned with robotic trajectory data to output robot actions, which are represented as text tokens. We showed that our approach results in very performant robotic policies and, more importantly, leads to a significantly better generalization performance and emergent capabilities inherited from | 2307.15818#36 | 2307.15818#38 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#38 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | 11 RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control web-scale vision-language pretraining. We believe that this simple and general approach shows a promise of robotics directly benefiting from better vision-language models, which puts the field of robot learning in a strategic position to further improve with advancements in other fields. # Acknowledgments We would like to acknowledge Fred Alcober, Jodi Lynn Andres, Carolina Parada, Joseph Dabis, Rochelle Dela Cruz, Jessica Gomez, Gavin Gonzalez, John Guilyard, Tomas Jackson, Jie Tan, Scott Lehrer, Dee M, Utsav Malla, Sarah Nguyen, Jane Park, Emily Perez, Elio Prado, Jornell Quiambao, Clayton Tan, Jodexty Therlonge, Eleanor Tomlinson, Wenxuan Zhou, and the greater Google DeepMind team for their feedback and contributions. | 2307.15818#37 | 2307.15818#39 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#39 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | 12 # RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control # References M. Ahn, A. Brohan, N. Brown, Y. Chebotar, O. Cortes, B. David, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog, et al. Do as I can, not as I say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691, 2022. | 2307.15818#38 | 2307.15818#40 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#40 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | J.-B. Alayrac, J. Donahue, P. Luc, A. Miech, I. Barr, Y. Hasson, K. Lenc, A. Mensch, K. Millican, M. Reynolds, et al. Flamingo: a visual language model for few-shot learning. arXiv preprint arXiv:2204.14198, 2022. R. Anil, A. M. Dai, O. Firat, M. Johnson, D. Lepikhin, A. Passos, S. Shakeri, E. Taropa, P. Bailey, Z. Chen, et al. Palm 2 technical report. arXiv preprint arXiv:2305.10403, 2023. A. Brohan, N. Brown, J. Carbajal, Y. Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog, J. Hsu, et al. Rt-1: Robotics transformer for real-world control at scale. arXiv preprint arXiv:2212.06817, 2022. T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. | 2307.15818#39 | 2307.15818#41 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#41 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | Language models are few-shot learners. Advances in neural information processing systems, 33:1877â 1901, 2020. D. Cer, Y. Yang, S. Kong, N. Hua, N. Limtiaco, R. S. John, N. Constant, M. Guajardo-Cespedes, S. Yuan, C. Tar, Y. Sung, B. Strope, and R. Kurzweil. Universal sentence encoder. CoRR, abs/1803.11175, 2018. URL http://arxiv.org/abs/1803.11175. M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. d. O. Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, et al. | 2307.15818#40 | 2307.15818#42 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#42 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021. X. Chen, J. Djolonga, P. Padlewski, B. Mustafa, S. Changpinyo, J. Wu, C. R. Ruiz, S. Goodman, X. Wang, Y. Tay, S. Shakeri, M. Dehghani, D. Salz, M. Lucic, M. Tschannen, A. Nagrani, H. Hu, M. Joshi, B. Pang, C. Montgomery, P. Pietrzyk, M. Ritter, A. Piergiovanni, M. Minderer, F. Pavetic, A. Waters, G. Li, I. Alabdulmohsin, L. Beyer, J. Amelot, K. Lee, A. P. Steiner, Y. Li, D. Keysers, A. Arnab, Y. Xu, K. Rong, A. Kolesnikov, M. Seyedhosseini, A. Angelova, X. Zhai, N. Houlsby, and R. | 2307.15818#41 | 2307.15818#43 | 2307.15818 | [
"2304.02643"
]
|
2307.15818#43 | RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control | Soricut. Pali-x: On scaling up a multilingual vision and language model, 2023a. X. Chen, X. Wang, S. Changpinyo, A. Piergiovanni, P. Padlewski, D. Salz, S. Goodman, A. Grycner, B. Mustafa, L. Beyer, A. Kolesnikov, J. Puigcerver, N. Ding, K. Rong, H. Akbari, G. Mishra, L. Xue, A. Thapliyal, J. Bradbury, W. Kuo, M. Seyedhosseini, C. Jia, B. K. Ayan, C. Riquelme, A. Steiner, A. Angelova, X. Zhai, N. Houlsby, and R. | 2307.15818#42 | 2307.15818#44 | 2307.15818 | [
"2304.02643"
]
|