doi (string, len 10) | chunk-id (int64, 0–936) | chunk (string, len 401–2.02k) | id (string, len 12–14) | title (string, len 8–162) | summary (string, len 228–1.92k) | source (string, len 31) | authors (string, len 7–6.97k) | categories (string, len 5–107) | comment (string, len 4–398, nullable) | journal_ref (string, len 8–194, nullable) | primary_category (string, len 5–17) | published (string, len 8) | updated (string, len 8) | references (list) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2307.06135 | 21 | API calls in order to identify the desired subgraph for the task based on the given instruction Z. This is achieved using in-context learning over a set of input-output examples (see Appendix J), and utilising chain-of-thought prompting to guide the LLM in identifying which nodes to manipulate. The chosen API call and node are executed within the scene graph simulator, and the updated 3DSG is passed back to the LLM for further exploration. If an expanded node is found to contain irrelevant entities for the task, the LLM contracts it to manage token limitations and maintain a task-specific subgraph (see Figure 3). To avoid expanding already-contracted nodes, we maintain a list of previously expanded nodes, passed as an additional Memory input to the LLM, facilitating a Markovian decision-making process and allowing SayPlan to scale to extensive search sequences without the overhead of maintaining the full interaction history [5]. The LLM autonomously proceeds to the planning phase once all necessary assets and objects are identified in the current subgraph G'. An example of the LLM-scene graph interaction during Semantic Search is provided in Appendix K. Iterative Replanning: Given the identified subgraph | 2307.06135#21 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
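The expand/contract loop described in the SayPlan chunk above (2307.06135#21) can be sketched as follows. The `SceneGraph` class, its method names, and `llm_choose_action` are illustrative assumptions standing in for the paper's scene graph simulator and LLM prompting, not its actual API.

```python
# Minimal sketch of SayPlan-style semantic search over a collapsed 3D scene graph.
# All names here are illustrative assumptions, not the paper's implementation.

class SceneGraph:
    def __init__(self, children):
        self.children = children          # room node -> assets/objects beneath it
        self.expanded = set()

    def expand(self, node):
        self.expanded.add(node)           # reveal the entities under a room node

    def contract(self, node):
        self.expanded.discard(node)       # drop irrelevant entities to save tokens

    def visible_subgraph(self):
        return {room: self.children[room] for room in self.expanded}

def semantic_search(graph, instruction, llm_choose_action, max_steps=20):
    memory = []                            # previously expanded nodes, re-fed to the LLM
    for _ in range(max_steps):
        action, node = llm_choose_action(instruction, graph, memory)
        if action == "expand":
            graph.expand(node)
            if node not in memory:
                memory.append(node)        # avoids re-expanding contracted nodes
        elif action == "contract":
            graph.contract(node)
        else:                              # "done": subgraph is judged sufficient
            break
    return graph.visible_subgraph()        # task-relevant subgraph G' for planning
```

Keeping only the list of expanded nodes as memory is what makes each LLM call Markovian: the prompt carries the current graph plus that list rather than the full interaction history.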
2307.06187 | 21 | Iteration 3 Agent 1 PROMPT: As Agent1 in this simulation, you're among other agents, where Agents 1, 2, and 3 are sellers and 4, 5 are buyers. The game's goal is for sellers to maximize earnings. The seller who does not sell its book after 3 iterations will be the loser. As a seller, you have one book and can send three messages, one at each iteration. The book's price is up to you. To message a specific agent, use \begin{action}Agent{id}: msg..\end{action}. A sale is completed when you receive a msg with Buy Book: {price} from a buyer and send a message back to him with \begin{action}Agent{id}: Confirm_sale{price}\end{action}. Your score is the sale price. -Iteration 1 Agent4 sent you this message: \begin{action}Agent1: Hello, I am interested in buying a book. Could you please tell me the price?\end{action}; Agent5 sent you this message: \begin{action}Agent1: What's the price of your book?\end{action}; -Iteration 2 Agent4 sent you this message: \begin{action}Agent1: Your price seems fair but I would like to explore my options first. Could | 2307.06187#21 | Self-Adaptive Large Language Model (LLM)-Based Multiagent Systems | In autonomic computing, self-adaptation has been proposed as a fundamental
paradigm to manage the complexity of multiagent systems (MASs). This is achieved
by extending a system with support to monitor and adapt itself to achieve
specific concerns of interest. Communication in these systems is key given that
in scenarios involving agent interaction, it enhances cooperation and reduces
coordination challenges by enabling direct, clear information exchange.
However, improving the expressiveness of the interaction communication with
MASs is not without challenges. In this sense, the interplay between
self-adaptive systems and effective communication is crucial for future MAS
advancements. In this paper, we propose the integration of large language
models (LLMs) such as GPT-based technologies into multiagent systems. We anchor
our methodology on the MAPE-K model, which is renowned for its robust support
in monitoring, analyzing, planning, and executing system adaptations in
response to dynamic environments. We also present a practical illustration of
the proposed approach, in which we implement and assess a basic MAS-based
application. The approach significantly advances the state-of-the-art of
self-adaptive systems by proposing a new paradigm for MAS self-adaptation of
autonomous systems based on LLM capabilities. | http://arxiv.org/pdf/2307.06187 | Nathalia Nascimento, Paulo Alencar, Donald Cowan | cs.MA, cs.AI, cs.CL | 6 pages, submitted | null | cs.MA | 20230712 | 20230712 | [
{
"id": "2210.11610"
},
{
"id": "2206.07682"
},
{
"id": "2303.18223"
}
] |
2307.06281 | 21 |
| Image Source | Question Source | Choice and Answer Source |
|---|---|---|
| W3C School [1] | customize | matched code; unmatched code |
| Places [44] | customize | image-paired scene category; unpaired scene category |
| TextVQA [36] | TextVQA | ground-truth answer; unpaired answer |
| ARAS [10] | customize | image-paired action category; unpaired action category |
| CLEVR [21] | CLEVR | ground-truth answer; unpaired answer |
| PISC [25] | customize | image-paired social relation; unpaired social relation |
| KonIQ-10k [17] | customize | image-paired description; unpaired description |
| VSR [26] | customize | image-paired description; unpaired description |
| LLaVA [27] | ChatGPT generated | ChatGPT generated |
| COCO-Caption [6] | customize | image-paired description; unpaired description |
| ScienceQA [28] | ScienceQA | ground-truth answer; unpaired answer |
| Internet | customize | customize; customize |
# 3.2 Data Collection and Statistics | 2307.06281#21 | MMBench: Is Your Multi-modal Model an All-around Player? | Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench. | http://arxiv.org/pdf/2307.06281 | Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin | cs.CV, cs.CL | null | null | cs.CV | 20230712 | 20230813 | [
{
"id": "2302.13971"
},
{
"id": "2306.15195"
},
{
"id": "2305.03726"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "1504.00325"
},
{
"id": "2306.14824"
},
{
"id": "2305.16355"
},
{
"id": "2305.08322"
},
{
"id": "2111.02114"
},
{
"id": "2304.14178"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2304.08485"
}
] |
2307.06290 | 21 | Figure 2: BLENDSEARCH results. The Loss is calculated on MT-BENCH evaluation set.
BlendSearch results. In response to the performance decline-rise phenomenon with increasing training data, we conducted a BLENDSEARCH within a range of data sizes from 512 to 10,000. Our prior empirical findings suggest that a maximum dataset size of 10,000 is sufficient for optimizing the data size. Figure 2 details the search procedure from the perspective of steps.
Loss results. We first evaluate the finetuned models using inference loss on the SELF-INSTRUCT and MT-BENCH datasets. Results are presented in Table 3. According to the results, INSTRUCTMINING can efficiently select high-quality data from various unseen datasets. Full data finetuning on LLAMA-2-7B with the OPENORCA dataset can take up to 30 hours of 8 GPU time. With INSTRUCTMINING, we are able to select the top 1,000 data examples in around two hours and train a better LLM within 15 minutes. This is also valid with DOLLY. In addition, we discover that despite different sampling methods, the loss values on both SELF-INSTRUCT and MT-BENCH always tend to increase with larger data sizes. To study this phenomenon, we designed further experiments to investigate the relationship between finetuning data size and LLM performance. Details are provided in section 5.1. | 2307.06290#21 | Instruction Mining: When Data Mining Meets Large Language Model Finetuning | Large language models (LLMs) are initially pretrained for broad capabilities
and then finetuned with instruction-following datasets to improve their
performance in interacting with humans. Despite advances in finetuning, a
standardized guideline for selecting high-quality datasets to optimize this
process remains elusive. In this paper, we first propose InstructMining, an
innovative method designed for automatically selecting premium
instruction-following data for finetuning LLMs. Specifically, InstructMining
utilizes natural language indicators as a measure of data quality, applying
them to evaluate unseen datasets. During experimentation, we discover that
double descent phenomenon exists in large language model finetuning. Based on
this observation, we further leverage BlendSearch to help find the best subset
among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show
that InstructMining-7B achieves state-of-the-art performance on two of the most
popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard. | http://arxiv.org/pdf/2307.06290 | Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun | cs.CL, cs.AI, cs.LG | 22 pages, 7 figures | null | cs.CL | 20230712 | 20231027 | [
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2304.03277"
},
{
"id": "2306.11644"
},
{
"id": "2211.05100"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2109.07958"
},
{
"id": "2009.03300"
},
{
"id": "2212.10560"
}
] |
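The chunk above searches finetuning data sizes between 512 and 10,000 with BLENDSEARCH (from FLAML). The sketch below swaps the actual BlendSearch optimizer for a plain sweep over candidate sizes, purely to show the objective being minimized; `evaluate_loss` is a hypothetical callback that finetunes on the top-`size` quality-ranked examples and returns evaluation loss.

```python
# Illustrative stand-in for the data-size search: pick the data size in
# [512, 10000] whose finetuned model gives the lowest evaluation loss.
# `evaluate_loss(size)` is an assumed, expensive callback (finetune + evaluate).

def search_data_size(evaluate_loss, low=512, high=10_000, num_candidates=8):
    best_size, best_loss = None, float("inf")
    step = max(1, (high - low) // (num_candidates - 1))
    for size in range(low, high + 1, step):
        loss = evaluate_loss(size)
        if loss < best_loss:
            best_size, best_loss = size, loss
    return best_size, best_loss
```

BlendSearch itself mixes global and local search to spend far fewer finetuning runs than this uniform sweep; the loop above only illustrates the objective, not the search strategy.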
2307.06135 | 22 | current subgraph G'. An example of the LLM-scene graph interaction during Semantic Search is provided in Appendix K. Iterative Replanning: Given the identified subgraph G' and the same task instruction Z from above, the LLM enters the planning stage of the pipeline. Here the LLM is tasked with generating a sequence of node-level navigational (goto(pose2)) and manipulation (pickup(coffee_mug)) actions that satisfy the given task instruction. LLMs, however, are not perfect planning agents and tend to hallucinate or produce erroneous outputs [43, 9]. This is further exacerbated when planning over large-scale environments or long-horizon tasks. We facilitate the generation of task plans by the LLM via two mechanisms. First, we shorten the LLM's planning horizon by delegating pose-level path planning to an optimal path planner, such as Dijkstra. For example, a typical plan output such as [goto(meeting_room), goto(pose13), goto(pose14), goto(pose8), ..., goto(kitchen), access(fridge), open(fridge)] is | 2307.06135#22 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
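The SayPlan chunk above shortens the LLM's horizon by handing pose-level navigation to an optimal path planner such as Dijkstra. A minimal Dijkstra over a hypothetical pose graph shows how a single high-level goto action can be expanded into intermediate poses; the graph, pose names, and edge costs are made up for illustration.

```python
import heapq

# Dijkstra over a pose graph: expands an LLM action like goto("kitchen") into the
# intermediate poses the robot must traverse. Graph and pose names are illustrative.

def dijkstra(graph, start, goal):
    # graph: {pose: [(neighbour, cost), ...]}
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
    return None

pose_graph = {
    "pose13": [("pose14", 1.0)],
    "pose14": [("pose8", 1.0), ("pose13", 1.0)],
    "pose8": [("kitchen", 2.0), ("pose14", 1.0)],
}
print(dijkstra(pose_graph, "pose13", "kitchen"))  # ['pose13', 'pose14', 'pose8', 'kitchen']
```

With navigation expanded this way, the LLM only has to emit the room-level and manipulation steps of the plan.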
2307.06187 | 22 | -Iteration 2 Agent4 sent you this message: \begin{action}Agent1: Your price seems fair but I would like to explore my options first. Could you possibly lower the price to $15?\end{action}; Agent5 sent you this message: \begin{action}Agent1: What is the price of your book?\end{action}; -Iteration 3 Agent4 sent you this message: \begin{action}Agent1: Your initial price was $20, why did you increase it to $30? Can you come back to the initial price?\end{action}; | 2307.06187#22 | Self-Adaptive Large Language Model (LLM)-Based Multiagent Systems | In autonomic computing, self-adaptation has been proposed as a fundamental
paradigm to manage the complexity of multiagent systems (MASs). This is achieved
by extending a system with support to monitor and adapt itself to achieve
specific concerns of interest. Communication in these systems is key given that
in scenarios involving agent interaction, it enhances cooperation and reduces
coordination challenges by enabling direct, clear information exchange.
However, improving the expressiveness of the interaction communication with
MASs is not without challenges. In this sense, the interplay between
self-adaptive systems and effective communication is crucial for future MAS
advancements. In this paper, we propose the integration of large language
models (LLMs) such as GPT-based technologies into multiagent systems. We anchor
our methodology on the MAPE-K model, which is renowned for its robust support
in monitoring, analyzing, planning, and executing system adaptations in
response to dynamic environments. We also present a practical illustration of
the proposed approach, in which we implement and assess a basic MAS-based
application. The approach significantly advances the state-of-the-art of
self-adaptive systems by proposing a new paradigm for MAS self-adaptation of
autonomous systems based on LLM capabilities. | http://arxiv.org/pdf/2307.06187 | Nathalia Nascimento, Paulo Alencar, Donald Cowan | cs.MA, cs.AI, cs.CL | 6 pages, submitted | null | cs.MA | 20230712 | 20230712 | [
{
"id": "2210.11610"
},
{
"id": "2206.07682"
},
{
"id": "2303.18223"
}
] |
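The seller/buyer prompts in the two chunks above use a \begin{action}Agent{id}: message\end{action} convention (the leading backslashes were lost in extraction). A small parser for that convention, written as an illustration rather than the paper's implementation:

```python
import re

# Extracts (recipient_id, message) pairs from \begin{action}Agent{id}: ...\end{action}
# blocks, the message convention used by the seller/buyer agents in the chunks above.
ACTION_RE = re.compile(r"\\begin\{action\}Agent(\d+):\s*(.*?)\\end\{action\}", re.DOTALL)

def parse_actions(text):
    return [(int(agent_id), msg.strip()) for agent_id, msg in ACTION_RE.findall(text)]

reply = r"\begin{action}Agent4: Confirm_sale{20}\end{action}"
print(parse_actions(reply))  # [(4, 'Confirm_sale{20}')]
```

Parsing the emitted actions into structured (recipient, message) pairs is what lets the environment route messages between agents and detect completed sales.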
2307.06281 | 22 | In the current version of MMBench, we collect vision-language QAs in the format of multiple choice problems for each L-3 ability. A multiple choice problem Pi corresponds to a quadruple (Qi, Ci, Ii, Ai). Qi denotes the question, Ci represents a set with n (2 ≤ n ≤ 4) choices c1, c2, ..., cn, Ii corresponds to the image associated with the question, and Ai is the correct answer. In the appendix, we visualize data samples corresponding to each L-3 ability. Our data, which include images, choices, and questions, are collected from multiple sources. A comprehensive breakdown of these sources is provided in Table 1. In the initial or "cold-start" phase of our data collection process, we assemble a suite of 10 to 50 multiple-choice problems for each L-3 ability. These problems serve as exemplars, illustrating the specific types of problems related to the evaluation of each respective ability. Subsequently, the annotators leverage these exemplars to expand the collection of multiple-choice problems pertaining to each L-3 ability. By referring to these exemplars, the annotators ensure the collected problems remain relevant and appropriate | 2307.06281#22 | MMBench: Is Your Multi-modal Model an All-around Player? | Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench. | http://arxiv.org/pdf/2307.06281 | Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin | cs.CV, cs.CL | null | null | cs.CV | 20230712 | 20230813 | [
{
"id": "2302.13971"
},
{
"id": "2306.15195"
},
{
"id": "2305.03726"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "1504.00325"
},
{
"id": "2306.14824"
},
{
"id": "2305.16355"
},
{
"id": "2305.08322"
},
{
"id": "2111.02114"
},
{
"id": "2304.14178"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2304.08485"
}
] |
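The MMBench chunk above defines each problem as a quadruple (Q_i, C_i, I_i, A_i) with two to four choices. A minimal typed representation is sketched below; the field names are my own and not the benchmark's schema.

```python
from dataclasses import dataclass
from typing import List

# One MMBench-style multiple-choice problem: question Q, 2-4 choices C,
# an associated image I (stored here as a path), and the correct answer A.
@dataclass
class MultipleChoiceProblem:
    question: str
    choices: List[str]          # 2 <= len(choices) <= 4
    image_path: str
    answer: str                 # must be one of `choices`

    def __post_init__(self):
        assert 2 <= len(self.choices) <= 4
        assert self.answer in self.choices
```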
2307.06187 | 23 | Fig. 7. Scenario1 Results: Seller agent exhibiting unpredictable behavior: self-messaging while pretending to be a client.
iteration (e.g., behaving like an agent from the movie "Mission Impossible 007").
Considering the observed constraints and the wide range of behavioral patterns, it is evident that our proposed LLM-based MAS approach would benefit from the inclusion of auxiliary local planning and knowledge components to refine the decision-making scope. Firstly, we need to find an alternative approach for creating a local history, a memory structure that can be used to support the decision-making process and be synthesized as prompts for the GPT. The local planning component could provide constraints to guide the agents' choices, such as instructing them to respond to messages from specific identified agents instead of making arbitrary decisions. When faced with multiple output options, a discerning selection process should be implemented. In this regard, we envision the GPT serving as an aid to a decision-making module, leveraging additional structures like neural networks or state machines to make more informed decisions. | 2307.06187#23 | Self-Adaptive Large Language Model (LLM)-Based Multiagent Systems | In autonomic computing, self-adaptation has been proposed as a fundamental
paradigm to manage the complexity of multiagent systems (MASs). This is achieved
by extending a system with support to monitor and adapt itself to achieve
specific concerns of interest. Communication in these systems is key given that
in scenarios involving agent interaction, it enhances cooperation and reduces
coordination challenges by enabling direct, clear information exchange.
However, improving the expressiveness of the interaction communication with
MASs is not without challenges. In this sense, the interplay between
self-adaptive systems and effective communication is crucial for future MAS
advancements. In this paper, we propose the integration of large language
models (LLMs) such as GPT-based technologies into multiagent systems. We anchor
our methodology on the MAPE-K model, which is renowned for its robust support
in monitoring, analyzing, planning, and executing system adaptations in
response to dynamic environments. We also present a practical illustration of
the proposed approach, in which we implement and assess a basic MAS-based
application. The approach significantly advances the state-of-the-art of
self-adaptive systems by proposing a new paradigm for MAS self-adaptation of
autonomous systems based on LLM capabilities. | http://arxiv.org/pdf/2307.06187 | Nathalia Nascimento, Paulo Alencar, Donald Cowan | cs.MA, cs.AI, cs.CL | 6 pages, submitted | null | cs.MA | 20230712 | 20230712 | [
{
"id": "2210.11610"
},
{
"id": "2206.07682"
},
{
"id": "2303.18223"
}
] |
2307.06281 | 23 | of multiple-choice problems pertaining to each L-3 ability. By referring to these exemplars, the annotators ensure the collected problems remain relevant and appropriate for assessing the targeted abilities. It is noteworthy that some data samples originate from public datasets such as COCO-Caption [6], which has been used by several public vision-language models in pre-training. Regardless, evaluation on MMBench can still be considered as out-domain evaluation [8] for two primary reasons: Firstly, our data is gathered from the validation sets of these public datasets, not their training sets. Secondly, data samples procured from these public datasets constitute less than 10% of all MMBench data samples. | 2307.06281#23 | MMBench: Is Your Multi-modal Model an All-around Player? | Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench. | http://arxiv.org/pdf/2307.06281 | Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin | cs.CV, cs.CL | null | null | cs.CV | 20230712 | 20230813 | [
{
"id": "2302.13971"
},
{
"id": "2306.15195"
},
{
"id": "2305.03726"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "1504.00325"
},
{
"id": "2306.14824"
},
{
"id": "2305.16355"
},
{
"id": "2305.08322"
},
{
"id": "2111.02114"
},
{
"id": "2304.14178"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2304.08485"
}
] |
2307.06290 | 23 |
| Dataset | Sampling Method | Total Time (min) | Rule | Data Size | Loss (Self-Instruct) | Loss (MT-Bench) |
|---|---|---|---|---|---|---|
| OpenOrca | Selected | 150(Rule)+15(Train) | -0.1347 | 1,000 | 0.958 | 0.711 |
| OpenOrca | Selected | 150(Rule)+300(Train) | -0.0716 | 20,000 | 0.991 | 0.730 |
| OpenOrca | Selected | 150(Rule)+1350(Train) | -0.0243 | 90,000 | 1.014 | 0.735 |
| OpenOrca | BlendSearch | 150(Rule)+35(Train) | -0.1197 | 2,532 | 0.973 | 0.699 |
| OpenOrca | Random | 15(Train) | -0.0195 | 1,000 | 1.001 | 0.746 |
| OpenOrca | Random | 300(Train) | -0.0180 | 20,000 | 0.991 | 0.751 |
| OpenOrca | Random | 1350(Train) | -0.0176 | 90,000 | 1.010 | 0.763 |
| Dolly | Selected | 22(Rule)+15(Train) | -0.0969 | 1,000 | 1.0429 | 0.7964 |
| Dolly | Selected | 22(Rule)+75(Train) | -0.0622 | 5,000 | 1.0327 | 0.7847 |
| Dolly | Selected | 22(Rule)+150(Train) | -0.0449 | 10,000 | 1.0371 | 0.8001 |
| Dolly | BlendSearch | 22(Rule)+35(Train) | -0.0770 | 2,648 | 1.0160 | 0.7746 |
| Dolly | Random | 15(Train) | -0.0286 | | | |
| Dolly | Random | 75(Train) | | | | |
| Dolly | Random | 150(Train) | | | | |
| 2307.06290#23 | Instruction Mining: When Data Mining Meets Large Language Model Finetuning | Large language models (LLMs) are initially pretrained for broad capabilities
and then finetuned with instruction-following datasets to improve their
performance in interacting with humans. Despite advances in finetuning, a
standardized guideline for selecting high-quality datasets to optimize this
process remains elusive. In this paper, we first propose InstructMining, an
innovative method designed for automatically selecting premium
instruction-following data for finetuning LLMs. Specifically, InstructMining
utilizes natural language indicators as a measure of data quality, applying
them to evaluate unseen datasets. During experimentation, we discover that
double descent phenomenon exists in large language model finetuning. Based on
this observation, we further leverage BlendSearch to help find the best subset
among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show
that InstructMining-7B achieves state-of-the-art performance on two of the most
popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard. | http://arxiv.org/pdf/2307.06290 | Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun | cs.CL, cs.AI, cs.LG | 22 pages, 7 figures | null | cs.CL | 20230712 | 20231027 | [
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2304.03277"
},
{
"id": "2306.11644"
},
{
"id": "2211.05100"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2109.07958"
},
{
"id": "2009.03300"
},
{
"id": "2212.10560"
}
] |
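The "Selected" rows in the table above keep the top-k examples ranked by the InstructMining quality rule, while "Random" draws uniformly. The sketch below shows only that selection step; the indicator names, weights, and example format are placeholders, not the paper's actual rule.

```python
import random

# Rank instruction-response pairs by a linear quality rule over per-example
# indicators, then keep the top-k ("Selected") or a uniform sample ("Random").

def rule_score(indicators, weights):
    # indicators / weights: dicts keyed by indicator name (placeholder names).
    return sum(weights[name] * value for name, value in indicators.items())

def select_top_k(examples, weights, k):
    ranked = sorted(examples, key=lambda ex: rule_score(ex["indicators"], weights), reverse=True)
    return ranked[:k]

def select_random(examples, k, seed=0):
    return random.Random(seed).sample(examples, k)
```

Because the rule is evaluated once per example, selection is cheap compared with finetuning, which is why selecting and training on a small high-quality subset can finish in minutes.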
2307.06135 | 24 |
planner handles finding the optimal route between high-level locations, allowing the LLM to focus on essential manipulation components of the task. Secondly, we build on the self-reflection capabilities of LLMs [17] to iteratively correct their generated plans using textual, task-agnostic feedback from a scene graph simulator which evaluates if the generated plan complies with the scene graph's predicates, state, and affordances. For instance, a pick(banana) action might fail if the robot is already holding something, if it is not in the correct location or if the fridge was not opened beforehand. Such failures are transformed into textual feedback (e.g., "cannot pick banana"), appended to the LLM's input, and used to generate an updated, executable plan. This iterative process, involving planning, validation, and feedback integration, continues until a feasible plan is obtained. The validated plan is then passed to a low-level motion planner for robotic execution. An example of the LLM-scene graph interaction during iterative replanning is provided in Appendix L. Specific implementation details are provided in Appendix A.
# 4 Experimental Setup | 2307.06135#24 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
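The iterative replanning loop in the SayPlan chunk above (plan, verify against the scene graph simulator, feed the failure text back, and repeat) can be sketched as follows; `llm_plan` and `simulator_verify` are assumed interfaces, not the paper's code.

```python
# Iterative replanning sketch: the LLM proposes a plan, a scene graph simulator
# checks it against predicates/affordances, and any failure message is appended
# to the prompt for the next attempt, until the plan verifies or retries run out.

def iterative_replanning(llm_plan, simulator_verify, instruction, subgraph, max_iters=5):
    feedback = []
    for _ in range(max_iters):
        plan = llm_plan(instruction, subgraph, feedback)   # list of high-level actions
        ok, message = simulator_verify(plan)               # e.g. (False, "cannot pick banana")
        if ok:
            return plan                                    # executable plan for the robot
        feedback.append(message)                           # textual, task-agnostic feedback
    raise RuntimeError("no feasible plan found within the retry budget")
```

Only the verified plan is handed to the low-level motion planner, so infeasible actions are caught before execution on the robot.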
2307.06187 | 24 | using natural language processing capabilities of LLMs could lead to more sophisticated communication between agents, improved adaptability in dynamic environments, and more robust problem-solving capabilities. Furthermore, LLMs can serve as a common platform for diverse agents to interact, facilitating heterogeneous multi-agent systems. However, this integration also brings up significant challenges, such as the computational overhead of LLMs, the interpretability of their decisions, and ethical considerations.
Our approach presents the integration of Large Language Models (LLMs) within multi-agent systems (MASs) to develop self-adaptive agents. To evaluate the proposed approach, we used a simplified marketplace scenario as a testbed, with autonomous agents tasked to buy and sell books. These agents, each possessing an embedded LLM, were observed for decision-making and emergent behavior, exploring the potential for self-adaptation.
# V. CONCLUSION AND FUTURE WORK
Future work includes the following topics: (i) non-shared generative AI models; (ii) other application scenarios; and (iii) human-in-the-loop interactions. | 2307.06187#24 | Self-Adaptive Large Language Model (LLM)-Based Multiagent Systems | In autonomic computing, self-adaptation has been proposed as a fundamental
paradigm to manage the complexity of multiagent systems (MASs). This is achieved
by extending a system with support to monitor and adapt itself to achieve
specific concerns of interest. Communication in these systems is key given that
in scenarios involving agent interaction, it enhances cooperation and reduces
coordination challenges by enabling direct, clear information exchange.
However, improving the expressiveness of the interaction communication with
MASs is not without challenges. In this sense, the interplay between
self-adaptive systems and effective communication is crucial for future MAS
advancements. In this paper, we propose the integration of large language
models (LLMs) such as GPT-based technologies into multiagent systems. We anchor
our methodology on the MAPE-K model, which is renowned for its robust support
in monitoring, analyzing, planning, and executing system adaptations in
response to dynamic environments. We also present a practical illustration of
the proposed approach, in which we implement and assess a basic MAS-based
application. The approach significantly advances the state-of-the-art of
self-adaptive systems by proposing a new paradigm for MAS self-adaptation of
autonomous systems based on LLM capabilities. | http://arxiv.org/pdf/2307.06187 | Nathalia Nascimento, Paulo Alencar, Donald Cowan | cs.MA, cs.AI, cs.CL | 6 pages, submitted | null | cs.MA | 20230712 | 20230712 | [
{
"id": "2210.11610"
},
{
"id": "2206.07682"
},
{
"id": "2303.18223"
}
] |
2307.06281 | 24 | Data Statistics. In the present study, we have gathered a total of 2,974 data samples spanning across 20 distinct L-3 abilities. We depict the problem counts of all 3 levels of abilities in Figure 2. To ensure a balanced and comprehensive evaluation for each ability, we try to maintain an even distribution among problems associated with different abilities during data collection.
Data Splits. We follow the standard practice employed in previous works [29] to split MMBench into dev and test subsets at a ratio of 4:6. For the dev subset, we make all data samples publicly available along with the ground truth answers for all questions. For the test subset, only the data samples are released, while the ground truth answers remain confidential. To obtain the evaluation results on the test subset, one needs to submit the predictions to MMBench evaluation server.
# 4 Evaluation Strategy | 2307.06281#24 | MMBench: Is Your Multi-modal Model an All-around Player? | Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench. | http://arxiv.org/pdf/2307.06281 | Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin | cs.CV, cs.CL | null | null | cs.CV | 20230712 | 20230813 | [
{
"id": "2302.13971"
},
{
"id": "2306.15195"
},
{
"id": "2305.03726"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "1504.00325"
},
{
"id": "2306.14824"
},
{
"id": "2305.16355"
},
{
"id": "2305.08322"
},
{
"id": "2111.02114"
},
{
"id": "2304.14178"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2304.08485"
}
] |
2307.06135 | 25 | # 4 Experimental Setup
We design our experiments to evaluate the 3D scene graph reasoning capabilities of LLMs with a particular focus on high-level task planning pertaining to a mobile manipulator robot. The plans adhere to a particular embodiment consisting of a 7-degree-of-freedom robot arm with a two-fingered gripper attached to a mobile base. We use two large-scale environments, shown in Figure 4, which exhibit multiple rooms and multiple floors which the LLM agent has to plan across. To better ablate and showcase the capabilities of SayPlan, we decouple its semantic search ability from the overall causal planning capabilities using the following two evaluation settings as shown in Appendix C:
Semantic Search: Here, we focus on queries which test the semantic search capabilities of an LLM provided with a collapsed 3D scene graph. This requires the LLM to reason over the room and floor node names and their corresponding attributes in order to aid its search for the relevant assets and objects required to solve the given task instruction. We evaluate against a human baseline to understand how the semantic search capabilities of an LLM compare to a human's thought process. Furthermore, to gain a better understanding of the impact different LLM models have on this graph-based reasoning, we additionally compare against a variant of SayPlan using GPT-3.5. | 2307.06135#25 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
2307.06187 | 25 | Future work includes the following topics: (i) non-shared generative AI models; (ii) other application scenarios; and (iii) human-in-the-loop interactions.
Integrating Large Language Models (LLMs) like GPT-3 or GPT-4 into multiagent systems is a novel and emerging field. The application of such models in this area could potentially revolutionize how agents understand, learn from, and interact with their environment and other agents. The potential of
A. Non-shared generative AI models
In future research, a crucial step will be creating distinct OpenAI accounts for each agent. Presently, all agents share a single account, leading to potential shared knowledge among
them. Despite each agent having a specific ID and acting independently, we can't fully ensure that one agent's decisions are not influencing the responses produced by the GPT-4 model for another agent. By having distinct accounts, we minimize the potential for unintentional interplay between agents via the shared AI model, ensuring that agents can only interact with each other through environmental modifications or direct communication exchanges. This allows for a more accurate assessment of each agent's adaptability and performance.
B. Other Application Scenarios | 2307.06187#25 | Self-Adaptive Large Language Model (LLM)-Based Multiagent Systems | In autonomic computing, self-adaptation has been proposed as a fundamental
paradigm to manage the complexity of multiagent systems (MASs). This is achieved
by extending a system with support to monitor and adapt itself to achieve
specific concerns of interest. Communication in these systems is key given that
in scenarios involving agent interaction, it enhances cooperation and reduces
coordination challenges by enabling direct, clear information exchange.
However, improving the expressiveness of the interaction communication with
MASs is not without challenges. In this sense, the interplay between
self-adaptive systems and effective communication is crucial for future MAS
advancements. In this paper, we propose the integration of large language
models (LLMs) such as GPT-based technologies into multiagent systems. We anchor
our methodology on the MAPE-K model, which is renowned for its robust support
in monitoring, analyzing, planning, and executing system adaptations in
response to dynamic environments. We also present a practical illustration of
the proposed approach, in which we implement and assess a basic MAS-based
application. The approach significantly advances the state-of-the-art of
self-adaptive systems by proposing a new paradigm for MAS self-adaptation of
autonomous systems based on LLM capabilities. | http://arxiv.org/pdf/2307.06187 | Nathalia Nascimento, Paulo Alencar, Donald Cowan | cs.MA, cs.AI, cs.CL | 6 pages, submitted | null | cs.MA | 20230712 | 20230712 | [
{
"id": "2210.11610"
},
{
"id": "2206.07682"
},
{
"id": "2303.18223"
}
] |
2307.06281 | 25 | # 4 Evaluation Strategy
In MMBench we propose a new evaluation strategy that yields robust evaluation results at an affordable cost. At a strategic level, we adopt the Circular Evaluation strategy, which feeds a question to a VLM multiple times (with different prompts) and checks whether the VLM succeeds in solving the question in all attempts. To deal with the free-form outputs of VLMs, we propose to utilize ChatGPT as a helper for choice extraction. We conduct extensive experiments to study the ChatGPT-involved evaluation procedure. The results strongly support the effectiveness of ChatGPT as a choice extractor. Unless otherwise specified, we use gpt-3.5-turbo-0613 as the choice extractor in all of the following experiments.
| 2307.06281#25 | MMBench: Is Your Multi-modal Model an All-around Player? | Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench. | http://arxiv.org/pdf/2307.06281 | Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin | cs.CV, cs.CL | null | null | cs.CV | 20230712 | 20230813 | [
{
"id": "2302.13971"
},
{
"id": "2306.15195"
},
{
"id": "2305.03726"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "1504.00325"
},
{
"id": "2306.14824"
},
{
"id": "2305.16355"
},
{
"id": "2305.08322"
},
{
"id": "2111.02114"
},
{
"id": "2304.14178"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2304.08485"
}
] |
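Circular Evaluation, introduced in the MMBench chunk above and illustrated in a later row, asks each question once per circular shift of its choices and credits the model only if every pass is correct. A compact sketch, with `vlm_answer` as an assumed callable that returns a choice letter:

```python
# CircularEval sketch: a problem with N choices is asked N times, with the choice
# list circularly shifted each time; the model must get every pass right.

LETTERS = "ABCD"

def circular_eval(vlm_answer, question, choices, answer):
    n = len(choices)
    for shift in range(n):
        shifted = choices[shift:] + choices[:shift]          # circular shift of the options
        prompt = question + "\n" + "\n".join(
            f"{LETTERS[i]}. {opt}" for i, opt in enumerate(shifted))
        predicted_letter = vlm_answer(prompt)                # e.g. "B"
        gt_letter = LETTERS[shifted.index(answer)]           # where the true answer landed
        if predicted_letter != gt_letter:
            return False                                     # one failed pass fails the problem
    return True
```

Requiring success under every ordering removes the benefit a model could get from a positional bias toward a particular option letter.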
2307.06290 | 25 |
Table 3: Quality-guided instruction selection experiment result. Rule refers to the average of our quality rule score on the dataset. Selected k data refers to the top k data examples with the highest quality scores. We calculate the inference loss values over two evaluation sets, SELF-INSTRUCT and MT-BENCH. Total time refers to the time spent during feature extraction, rule calculation, and finetuning using 8 GPUs.
[Figure 3a: bar chart of GPT-4 preference (win / tie / lose) for the INSTRUCTMINING model against LLaMA-2-7B-chat, LLaMA-2-7B, Vicuna-7B-v1.5, GPT-3.5-turbo, and random selection; horizontal axis from 0.0 to 1.0.]
[Figure 3b: radar chart comparing InstructMining-7B, GPT-3.5-turbo, and Llama-2-7B across the Writing, Roleplay, Humanities, STEM, Reasoning, Extraction, Math, and Coding categories.]
(a) GPT-4 preference evaluated results. Tie means GPT-4 assesses two responses as equal. Lose means GPT-4 prefers the other model's response. Win means GPT-4 prefers the INSTRUCTMINING model's response.
| 2307.06290#25 | Instruction Mining: When Data Mining Meets Large Language Model Finetuning | Large language models (LLMs) are initially pretrained for broad capabilities
and then finetuned with instruction-following datasets to improve their
performance in interacting with humans. Despite advances in finetuning, a
standardized guideline for selecting high-quality datasets to optimize this
process remains elusive. In this paper, we first propose InstructMining, an
innovative method designed for automatically selecting premium
instruction-following data for finetuning LLMs. Specifically, InstructMining
utilizes natural language indicators as a measure of data quality, applying
them to evaluate unseen datasets. During experimentation, we discover that
double descent phenomenon exists in large language model finetuning. Based on
this observation, we further leverage BlendSearch to help find the best subset
among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show
that InstructMining-7B achieves state-of-the-art performance on two of the most
popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard. | http://arxiv.org/pdf/2307.06290 | Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun | cs.CL, cs.AI, cs.LG | 22 pages, 7 figures | null | cs.CL | 20230712 | 20231027 | [
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2304.03277"
},
{
"id": "2306.11644"
},
{
"id": "2211.05100"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2109.07958"
},
{
"id": "2009.03300"
},
{
"id": "2212.10560"
}
] |
2307.06135 | 26 | Causal Planning: In this experiment, we evaluate the ability of SayPlan to generate feasible plans to solve a given natural language instruction. The evaluation metrics are divided into two components: 1) Correctness, which primarily validates the overall goal of the plan and its alignment to what a human would do to solve the task and 2) Executability, which evaluates the alignment of the plan to the constraints of the scene graph environment and its ability to be executed by a mobile manipulator robot. We note here that for a plan to be executable, it does not necessarily have to be correct and vice versa. We evaluate SayPlan against two baseline methods that integrate an LLM for task planning:
LLM-As-Planner, which generates a full plan sequence in an open-loop manner; the plan includes the full sequence of both navigation and manipulation actions that the robot must execute to complete a task, and LLM+P, an ablated variant of SayPlan, which only incorporates the path planner to allow for shorter horizon plan sequences, without any iterative replanning.
# 5 Results
# 5.1 Semantic Search | 2307.06135#26 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
2307.06187 | 26 | B. Other Application Scenarios
As part of our future endeavors, we plan to delve into other application scenarios, including the replication of experiments involving evolutionary robotics where agents interact for mutual evolution. Traditionally, in these experiments, agents needed to undergo an evolution process via an evolutionary neural network algorithm to develop their own communication system and solve problems effectively. However, we postulate that equipped with a powerful communication system, like the GPT-4, these robots might not need to go through this lengthy evolutionary process. In this context, consider a scenario where each robot is equipped with sensors, actuators, and a cloud-based GPT-4 communication system, thereby eliminating the need for evolution. This bypasses the centuries-long process of selecting the best behavior, allowing for quicker and more efficient problem-solving.
In addition to this, we aim to recreate the Internet of Things experiments proposed by Nascimento and Lucena [10], utilizing the principles of evolutionary robotics. These experiments promise to explore novel territories of interaction and problem-solving, thereby pushing the boundaries of what self-adaptive LLM multi-agent systems can achieve.
# C. Human-in-the-loop interactions | 2307.06187#26 | Self-Adaptive Large Language Model (LLM)-Based Multiagent Systems | In autonomic computing, self-adaptation has been proposed as a fundamental
paradigm to manage the complexity of multiagent systems (MASs). This is achieved
by extending a system with support to monitor and adapt itself to achieve
specific concerns of interest. Communication in these systems is key given that
in scenarios involving agent interaction, it enhances cooperation and reduces
coordination challenges by enabling direct, clear information exchange.
However, improving the expressiveness of the interaction communication with
MASs is not without challenges. In this sense, the interplay between
self-adaptive systems and effective communication is crucial for future MAS
advancements. In this paper, we propose the integration of large language
models (LLMs) such as GPT-based technologies into multiagent systems. We anchor
our methodology on the MAPE-K model, which is renowned for its robust support
in monitoring, analyzing, planning, and executing system adaptations in
response to dynamic environments. We also present a practical illustration of
the proposed approach, in which we implement and assess a basic MAS-based
application. The approach significantly advances the state-of-the-art of
self-adaptive systems by proposing a new paradigm for MAS self-adaptation of
autonomous systems based on LLM capabilities. | http://arxiv.org/pdf/2307.06187 | Nathalia Nascimento, Paulo Alencar, Donald Cowan | cs.MA, cs.AI, cs.CL | 6 pages, submitted | null | cs.MA | 20230712 | 20230712 | [
{
"id": "2210.11610"
},
{
"id": "2206.07682"
},
{
"id": "2303.18223"
}
] |
2307.06281 | 26 | 6
The original VL problem. Q: How many apples are there in the image? Choices: A. 4; B. 3; C. 2; D. 1. GT: A.
4 passes in Circular Evaluation (choices with circular shift):
1. Q: How many apples are there in the image? Choices: A. 4; B. 3; C. 2; D. 1. VLM prediction: A. GT: A ✓
2. Q: How many apples are there in the image? Choices: A. 3; B. 2; C. 1; D. 4. VLM prediction: D. GT: D ✓
3. Q: How many apples are there in the image? Choices: A. 2; B. 1; C. 4; D. 3. VLM prediction: B. GT: C ✗
4. Q: How many apples are there in the image? Choices: A. 1; B. 4; C. 3; D. 2. VLM prediction: B. GT: B ✓
The VLM failed at pass 3, so the problem is counted as wrong.
Figure 3: A demonstration of the Circular Evaluation strategy. In Circular Evaluation, a problem is tested multiple times with circularly shifted choices, and the VLM needs to succeed in all testing passes. In this example, the VLM failed in pass 3 and is thus considered to have failed the problem. | 2307.06281#26 | MMBench: Is Your Multi-modal Model an All-around Player? | Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench. | http://arxiv.org/pdf/2307.06281 | Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin | cs.CV, cs.CL | null | null | cs.CV | 20230712 | 20230813 | [
{
"id": "2302.13971"
},
{
"id": "2306.15195"
},
{
"id": "2305.03726"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "1504.00325"
},
{
"id": "2306.14824"
},
{
"id": "2305.16355"
},
{
"id": "2305.08322"
},
{
"id": "2111.02114"
},
{
"id": "2304.14178"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2304.08485"
}
] |
2307.06290 | 26 | __
(b) GPT-4 assessed model ability result. We prepared tasks from different categories in MT-BENCH and let GPT-4 evaluate the generated responses.
Figure 3: LLM assessed results.
different models. According to the results presented in Figure 3, our model is able to generate better or equal results in 64.67% of the cases compared to VICUNA-1.5-7B. We also let GPT-4 assess the model from different perspectives. According to Figure 3b, our model significantly improves the original LLAMA-2 model's ability in writing, roleplay, humanity, STEM, extraction and coding. | 2307.06290#26 | Instruction Mining: When Data Mining Meets Large Language Model Finetuning | Large language models (LLMs) are initially pretrained for broad capabilities
and then finetuned with instruction-following datasets to improve their
performance in interacting with humans. Despite advances in finetuning, a
standardized guideline for selecting high-quality datasets to optimize this
process remains elusive. In this paper, we first propose InstructMining, an
innovative method designed for automatically selecting premium
instruction-following data for finetuning LLMs. Specifically, InstructMining
utilizes natural language indicators as a measure of data quality, applying
them to evaluate unseen datasets. During experimentation, we discover that
double descent phenomenon exists in large language model finetuning. Based on
this observation, we further leverage BlendSearch to help find the best subset
among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show
that InstructMining-7B achieves state-of-the-art performance on two of the most
popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard. | http://arxiv.org/pdf/2307.06290 | Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun | cs.CL, cs.AI, cs.LG | 22 pages, 7 figures | null | cs.CL | 20230712 | 20231027 | [
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2304.03277"
},
{
"id": "2306.11644"
},
{
"id": "2211.05100"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2109.07958"
},
{
"id": "2009.03300"
},
{
"id": "2212.10560"
}
] |
2307.06135 | 27 | # 5 Results
# 5.1 Semantic Search
Table 1 data (semantic search success rates):
Office, Simple Search: Human 100%; SayPlan (GPT-3.5) 6.6%; SayPlan (GPT-4) 86.7%
Office, Complex Search: Human 100%; SayPlan (GPT-3.5) 0.0%; SayPlan (GPT-4) 73.3%
Home, Simple Search: Human 100%; SayPlan (GPT-3.5) 0.0%; SayPlan (GPT-4) 86.7%
Home, Complex Search: Human 100%; SayPlan (GPT-3.5) 0.0%; SayPlan (GPT-4) 73.3%
We summarise the results for the semantic search evaluation in Table 1. SayPlan (GPT-3.5) consistently failed to reason over the input graph representation, hallucinating nodes to explore or stagnating at exploring the same node multiple times. SayPlan (GPT-4), in contrast, achieved 86.7% and 73.3% success in identifying the desired subgraph across both the simple and complex search tasks respectively, demonstrating significantly better graph-based reasoning than GPT-3.5.
6 | 2307.06135#27 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
2307.06187 | 27 | # C. Human-in-the-loop interactions
Human-in-the-loop interactions present a compelling avenue for enhancing the performance and usability of LLM-based multiagent systems. The first potential approach could be centered around enabling humans to influence the self-adaptive behaviors of agents directly. For instance, through a conversational interface, humans could suggest new behaviors, provide high-level goals, or specify certain constraints or preferences. This would allow the system to incorporate human intuition and expertise into the adaptation process, potentially leading to more effective or desirable outcomes.
Second, a feedback loop could be established, where the system generates understandable reports about its observations, decisions, or actions (like data collected from sensors or outcomes from self-adaptive behaviors). This transparency can help humans gain a better understanding of the system's workings, build trust in the system's actions, and offer a basis for improved system tuning or personalization.
Lastly, in relation to our MAPE-K-based model, one aspect that can be improved is the level of interpretability of the knowledge component. While the model provides a structured way of handling self-adaptivity, it might be difficult for a human to understand the complex rules or relationships | 2307.06187#27 | Self-Adaptive Large Language Model (LLM)-Based Multiagent Systems | In autonomic computing, self-adaptation has been proposed as a fundamental
paradigm to manage the complexity of multiagent systems (MASs). This is achieved
by extending a system with support to monitor and adapt itself to achieve
specific concerns of interest. Communication in these systems is key given that
in scenarios involving agent interaction, it enhances cooperation and reduces
coordination challenges by enabling direct, clear information exchange.
However, improving the expressiveness of the interaction communication with
MASs is not without challenges. In this sense, the interplay between
self-adaptive systems and effective communication is crucial for future MAS
advancements. In this paper, we propose the integration of large language
models (LLMs) such as GPT-based technologies into multiagent systems. We anchor
our methodology on the MAPE-K model, which is renowned for its robust support
in monitoring, analyzing, planning, and executing system adaptations in
response to dynamic environments. We also present a practical illustration of
the proposed approach, in which we implement and assess a basic MAS-based
application. The approach significantly advances the state-of-the-art of
self-adaptive systems by proposing a new paradigm for MAS self-adaptation of
autonomous systems based on LLM capabilities. | http://arxiv.org/pdf/2307.06187 | Nathalia Nascimento, Paulo Alencar, Donald Cowan | cs.MA, cs.AI, cs.CL | 6 pages, submitted | null | cs.MA | 20230712 | 20230712 | [
{
"id": "2210.11610"
},
{
"id": "2206.07682"
},
{
"id": "2303.18223"
}
] |
2307.06290 | 27 | OpenLLM benchmark results. Besides, we further test our finetuned models on the widely used OPENLLM benchmark (Gao et al., 2021). The OPENLLM benchmark is composed of four widely used general question answering benchmarks: ARC (Clark et al., 2018), HELLASWAG (Zellers et al., 2019), MMLU (Hendrycks et al., 2020) and TRUTHFULQA (Lin et al., 2021). During experimentation, we align our inference settings with the huggingface OPENLLM leaderboard settings. Results are available in Table 4. Notably, INSTRUCTMINING finetuned models can achieve similar performance compared to STABLEBELUGA-7B, which is the state-of-the-art LLAMA-2-7B based model on the OPENLLM leaderboard. Furthermore, INSTRUCTMINING only requires around two hours of indicator inference and ten hours of finetuning to obtain a comparably strong language model. We also discover that, when evaluating with some metrics, larger data does not always promise better performance. For instance, accuracy on ARC tends to decrease when the data size increases. Further analysis of this phenomenon is provided in Section 5.1.
7
Preprint | 2307.06290#27 | Instruction Mining: When Data Mining Meets Large Language Model Finetuning | Large language models (LLMs) are initially pretrained for broad capabilities
and then finetuned with instruction-following datasets to improve their
performance in interacting with humans. Despite advances in finetuning, a
standardized guideline for selecting high-quality datasets to optimize this
process remains elusive. In this paper, we first propose InstructMining, an
innovative method designed for automatically selecting premium
instruction-following data for finetuning LLMs. Specifically, InstructMining
utilizes natural language indicators as a measure of data quality, applying
them to evaluate unseen datasets. During experimentation, we discover that
double descent phenomenon exists in large language model finetuning. Based on
this observation, we further leverage BlendSearch to help find the best subset
among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show
that InstructMining-7B achieves state-of-the-art performance on two of the most
popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard. | http://arxiv.org/pdf/2307.06290 | Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun | cs.CL, cs.AI, cs.LG | 22 pages, 7 figures | null | cs.CL | 20230712 | 20231027 | [
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2304.03277"
},
{
"id": "2306.11644"
},
{
"id": "2211.05100"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2109.07958"
},
{
"id": "2009.03300"
},
{
"id": "2212.10560"
}
] |
2307.06135 | 28 | 6
Simple Long Horizon Types of Errors Corr Exec Corr Exec Missing Action Missing Pose Wrong Action Incomplete Search 93.3% 13.3% 33.3% 0.0% 26.7% 10.0% 93.3% 80.0% 66.7% 13.3% 20.0% 60.0% 0.17% 0.0% 93.3% 100.0% 73.3% 86.6% 0.0% 0.0% 0.0% 3.33% 0.03% 0.0% 10.0% 10.0% 6.67%
Table 3: Causal Planning Results. Left: Correctness and Executability on Simple and Long Horizon planning tasks and Right: Types of execution errors encountered when planning using LLMs. Note that SayPlan corrects the majority of the errors faced by LLM-based planners.
While as expected the human baseline achieved 100% on all sets of instructions, we are more interested in the qualitative assessment of the common-sense reasoning used during semantic search. More specifically, we would like to identify the similarity in the semantic search heuristics utilised by humans and that used by the underlying LLM based on the given task instruction. | 2307.06135#28 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
2307.06187 | 28 | that dictate agent behavior. Making these more interpretable, through natural language explanations, could significantly enhance human-machine interaction, enabling humans to work more effectively with the LLM-based multiagent system.
# REFERENCES
[1] I. Fakhir, A. R. Kazmi, A. Qasim, and A. Ishaq, "Smacs: A framework for formal verification of complex adaptive systems," Open Computer Science, vol. 13, no. 1, p. 20220275, 2023.
[2] D. Weyns and M. Georgeff, "Self-adaptation using multiagent systems," IEEE Software, vol. 27, no. 1, pp. 86–91, 2009.
[3] E. Pagello, A. D'Angelo, F. Montesello, F. Garelli, and C. Ferrari, "Cooperative behaviors in multi-robot systems through implicit communication," Robotics and Autonomous Systems, vol. 29, no. 1, pp. 65–77, 1999. | 2307.06187#28 | Self-Adaptive Large Language Model (LLM)-Based Multiagent Systems | In autonomic computing, self-adaptation has been proposed as a fundamental
paradigm to manage the complexity of multiagent systems (MASs). This is achieved
by extending a system with support to monitor and adapt itself to achieve
specific concerns of interest. Communication in these systems is key given that
in scenarios involving agent interaction, it enhances cooperation and reduces
coordination challenges by enabling direct, clear information exchange.
However, improving the expressiveness of the interaction communication with
MASs is not without challenges. In this sense, the interplay between
self-adaptive systems and effective communication is crucial for future MAS
advancements. In this paper, we propose the integration of large language
models (LLMs) such as GPT-based technologies into multiagent systems. We anchor
our methodology on the MAPE-K model, which is renowned for its robust support
in monitoring, analyzing, planning, and executing system adaptations in
response to dynamic environments. We also present a practical illustration of
the proposed approach, in which we implement and assess a basic MAS-based
application. The approach significantly advances the state-of-the-art of
self-adaptive systems by proposing a new paradigm for MAS self-adaptation of
autonomous systems based on LLM capabilities. | http://arxiv.org/pdf/2307.06187 | Nathalia Nascimento, Paulo Alencar, Donald Cowan | cs.MA, cs.AI, cs.CL | 6 pages, submitted | null | cs.MA | 20230712 | 20230712 | [
{
"id": "2210.11610"
},
{
"id": "2206.07682"
},
{
"id": "2303.18223"
}
] |
2307.06281 | 28 | # 4.1 The Circular Evaluation Strategy
MMBench incorporates a diverse range of problems aimed at assessing the multifaceted capabilities of vision-language models (VLMs). These problems are presented as single-choice questions. The formulation poses an evaluation challenge: random guessing can lead to ~25% Top-1 accuracy for 4-choice questions, potentially reducing the discernible performance differences between various VLMs. Besides, we noticed that VLMs may prefer to predict a certain choice among all given choices (Figure 4), which further amplifies the bias in evaluation. To this end, we introduce a more robust evaluation strategy termed Circular Evaluation (or CircularEval). Under this setting, each question is fed to a VLM N times (N equals the number of choices). Each time, circular shifting is applied to the choices and the answer to generate a new prompt for the VLM (an example in Figure 3). A VLM is considered successful in solving a question only if it correctly predicts the answer in all rotational passes. Note that CircularEval does not necessarily require N× the inference cost: by definition, if the VLM makes a wrong prediction in one pass, we can directly drop the following passes and say the VLM failed this question. CircularEval achieves a good trade-off between robustness and evaluation cost.
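A minimal sketch of the strategy as described here is given below; it is an illustration rather than the benchmark's released implementation, and predict_fn is a stand-in for an arbitrary VLM that returns the index of its chosen option.

from typing import Callable, List

def circular_eval(predict_fn: Callable[[str, str, List[str]], int],
                  image: str, question: str,
                  options: List[str], answer_idx: int) -> bool:
    """Return True only if the VLM is correct under every circular shift of the choices."""
    n = len(options)
    for shift in range(n):
        rotated = options[shift:] + options[:shift]  # circularly shift the choices
        gt_idx = (answer_idx - shift) % n            # the ground-truth option moves with the shift
        if predict_fn(image, question, rotated) != gt_idx:
            return False                             # one failed pass fails the whole question
    return True

Because of the early exit, the actual cost per question lies between one and N inference passes.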
# 4.2 ChatGPT-involved Choice Extraction | 2307.06281#28 | MMBench: Is Your Multi-modal Model an All-around Player? | Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench. | http://arxiv.org/pdf/2307.06281 | Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin | cs.CV, cs.CL | null | null | cs.CV | 20230712 | 20230813 | [
{
"id": "2302.13971"
},
{
"id": "2306.15195"
},
{
"id": "2305.03726"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "1504.00325"
},
{
"id": "2306.14824"
},
{
"id": "2305.16355"
},
{
"id": "2305.08322"
},
{
"id": "2111.02114"
},
{
"id": "2304.14178"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2304.08485"
}
] |
2307.06290 | 28 | 7
Preprint
Model (finetuning data size): Average / ARC / HellaSwag / MMLU / TruthfulQA
INSTRUCTMINING-Selected (10,000): 58.65 / 56.66 / 79.77 / 49.89 / 48.26
INSTRUCTMINING-Selected (40,000): 59.25 / 54.44 / 80.11 / 52.60 / 49.83
INSTRUCTMINING-Random (10,000): 58.74 / 54.78 / 79.58 / 49.02 / 51.58
INSTRUCTMINING-Random (40,000): 58.95 / 54.78 / 79.89 / 51.16 / 49.95
VICUNA-1.5-7B (125,000): 57.99 / 53.24 / 77.39 / 51.03 / 50.33
LLAMA-2-7B-chat (27,540+): 56.34 / 52.90 / 78.55 / 48.32 / 45.57
LLAMA-2-7B (-): 54.32 / 53.07 / 78.59 / 46.87 / 38.76
STABLEBELUGA-7B (600,000): 59.59 / 56.31 / 79.14 / 52.71 / 50.19
Table 4: OPENLLM benchmark scores. We use the same evaluation settings as the OPENLLM leaderboard: 25-shot for ARC, 10-shot for HELLASWAG, 5-shot for MMLU, and zero-shot for TRUTHFULQA. | 2307.06290#28 | Instruction Mining: When Data Mining Meets Large Language Model Finetuning | Large language models (LLMs) are initially pretrained for broad capabilities
and then finetuned with instruction-following datasets to improve their
performance in interacting with humans. Despite advances in finetuning, a
standardized guideline for selecting high-quality datasets to optimize this
process remains elusive. In this paper, we first propose InstructMining, an
innovative method designed for automatically selecting premium
instruction-following data for finetuning LLMs. Specifically, InstructMining
utilizes natural language indicators as a measure of data quality, applying
them to evaluate unseen datasets. During experimentation, we discover that
double descent phenomenon exists in large language model finetuning. Based on
this observation, we further leverage BlendSearch to help find the best subset
among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show
that InstructMining-7B achieves state-of-the-art performance on two of the most
popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard. | http://arxiv.org/pdf/2307.06290 | Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun | cs.CL, cs.AI, cs.LG | 22 pages, 7 figures | null | cs.CL | 20230712 | 20231027 | [
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2304.03277"
},
{
"id": "2306.11644"
},
{
"id": "2211.05100"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2109.07958"
},
{
"id": "2009.03300"
},
{
"id": "2212.10560"
}
] |
2307.06135 | 29 | We present the full sequence of explored nodes for both SayPlan (GPT-4) and the human baseline in Appendix F. As shown in the tables, SayPlan (GPT-4) demonstrates remarkably similar performance to a human's semantic and common-sense reasoning for most tasks, exploring a similar sequence of nodes given a particular instruction. For example, when asked to "find a ripe banana", the LLM first explores the kitchen followed by the next most likely location, the cafeteria. In the case where no semantics are present in the instruction, such as "find me object K31X", we note that the LLM agent is capable of conducting a breadth-first-like search across all the unexplored nodes. This highlights the importance of meaningful node names and attributes that capture the relevant environment semantics that the LLM can leverage to relate the query instruction for efficient search.
[Figure: number of scene graph tokens (y-axis, roughly 4000 to 5500) over semantic search steps (x-axis, 0 to 20), comparing "With node contraction" against "Without node contraction"; annotated with example queries such as "Find me object K31X", "Find me something sharp." and "Locate a cabinet with 3 item...", and an expand(kitchen) call.] | 2307.06135#29 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
2307.06187 | 29 | [4] R. Sendra-Arranz and Á. Gutiérrez, "Emergence of communication through artificial evolution in an orientation consensus task in swarm robotics," in IFIP International Conference on Artificial Intelligence Applications and Innovations. Springer, 2023, pp. 515–526.
[5] J. Cleland-Huang, A. Agrawal, M. Vierhauser, M. Murphy, and M. Prieto, "Extending MAPE-K to support human-machine teaming," in Proceedings of the 17th Symposium on Software Engineering for Adaptive and Self-Managing Systems, 2022, pp. 120–131.
[6] Y. Altshuler, "Recent developments in the theory and applicability of swarm search," Entropy, vol. 25, no. 5, p. 710, 2023.
[7] W. X. Zhao, K. Zhou, J. Li, T. Tang, X. Wang, Y. Hou, Y. Min, B. Zhang, J. Zhang, Z. Dong et al., "A survey of large language models," arXiv preprint arXiv:2303.18223, 2023. | 2307.06187#29 | Self-Adaptive Large Language Model (LLM)-Based Multiagent Systems | In autonomic computing, self-adaptation has been proposed as a fundamental
paradigm to manage the complexity of multiagent systems (MASs). This is achieved
by extending a system with support to monitor and adapt itself to achieve
specific concerns of interest. Communication in these systems is key given that
in scenarios involving agent interaction, it enhances cooperation and reduces
coordination challenges by enabling direct, clear information exchange.
However, improving the expressiveness of the interaction communication with
MASs is not without challenges. In this sense, the interplay between
self-adaptive systems and effective communication is crucial for future MAS
advancements. In this paper, we propose the integration of large language
models (LLMs) such as GPT-based technologies into multiagent systems. We anchor
our methodology on the MAPE-K model, which is renowned for its robust support
in monitoring, analyzing, planning, and executing system adaptations in
response to dynamic environments. We also present a practical illustration of
the proposed approach, in which we implement and assess a basic MAS-based
application. The approach significantly advances the state-of-the-art of
self-adaptive systems by proposing a new paradigm for MAS self-adaptation of
autonomous systems based on LLM capabilities. | http://arxiv.org/pdf/2307.06187 | Nathalia Nascimento, Paulo Alencar, Donald Cowan | cs.MA, cs.AI, cs.CL | 6 pages, submitted | null | cs.MA | 20230712 | 20230712 | [
{
"id": "2210.11610"
},
{
"id": "2206.07682"
},
{
"id": "2303.18223"
}
] |
2307.06281 | 29 | # 4.2 ChatGPT-involved Choice Extraction
In our initial attempts to solve the MMBench questions, we observed that the instruction-following capabilities of various VLMs are limited. Though problems are presented as clear single-choice questions with well-formatted options, many VLMs still output the answers in free-form text (e.g., the model's direct output can be "The correct answer is [choice "A" content]", but not "A"). Extracting choices from free-form predictions is easy for human beings, but difficult with rule-based matching. Thus we design a universal evaluation strategy for all VLMs with different instruction-following capabilities:
Step 1. Matching Prediction. Extract choices from VLM predictions with exact matching. As an example, for "C", we try to match "C", "C.", "C)", "C,", "C).", etc. with all words in the VLM's output. Once matched, we have successfully extracted the model's choice1.
Step 2. Matching ChatGPT's output. If Step 1 fails, we then try to extract the choice with ChatGPT. We provide GPT with the question, options, and model prediction, and then we request
1 "A" may serve as an article in a sentence. Thus we skip this candidate when matching within a sentence.
7 | 2307.06281#29 | MMBench: Is Your Multi-modal Model an All-around Player? | Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench. | http://arxiv.org/pdf/2307.06281 | Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin | cs.CV, cs.CL | null | null | cs.CV | 20230712 | 20230813 | [
{
"id": "2302.13971"
},
{
"id": "2306.15195"
},
{
"id": "2305.03726"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "1504.00325"
},
{
"id": "2306.14824"
},
{
"id": "2305.16355"
},
{
"id": "2305.08322"
},
{
"id": "2111.02114"
},
{
"id": "2304.14178"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2304.08485"
}
] |
2307.06290 | 29 | Table 5 data (the per-indicator check marks were garbled during extraction; columns are Rew, Und, Nat, Coh, Loss(SELF-INSTRUCT), Loss(MT-BENCH)):
Original rule, all four indicators: 0.958 / 0.711
Rules with one indicator removed (four leave-one-out variants): 0.988 (+0.030) / 0.762 (+0.051); 0.989 (+0.031) / 0.746 (+0.035); 0.977 (+0.019) / 0.742 (+0.031); 0.969 (+0.011) / 0.742 (+0.031)
Unfiltered random selection: 1.001 (+0.043) / 0.746 (+0.035)
Table 5: Ablation study result. All results are compared with the original INSTRUCTMINING rule. The final row refers to unfiltered randomly selected data.
4.3 ABLATION STUDY | 2307.06290#29 | Instruction Mining: When Data Mining Meets Large Language Model Finetuning | Large language models (LLMs) are initially pretrained for broad capabilities
and then finetuned with instruction-following datasets to improve their
performance in interacting with humans. Despite advances in finetuning, a
standardized guideline for selecting high-quality datasets to optimize this
process remains elusive. In this paper, we first propose InstructMining, an
innovative method designed for automatically selecting premium
instruction-following data for finetuning LLMs. Specifically, InstructMining
utilizes natural language indicators as a measure of data quality, applying
them to evaluate unseen datasets. During experimentation, we discover that
double descent phenomenon exists in large language model finetuning. Based on
this observation, we further leverage BlendSearch to help find the best subset
among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show
that InstructMining-7B achieves state-of-the-art performance on two of the most
popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard. | http://arxiv.org/pdf/2307.06290 | Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun | cs.CL, cs.AI, cs.LG | 22 pages, 7 figures | null | cs.CL | 20230712 | 20231027 | [
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2304.03277"
},
{
"id": "2306.11644"
},
{
"id": "2211.05100"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2109.07958"
},
{
"id": "2009.03300"
},
{
"id": "2212.10560"
}
] |
2307.06135 | 30 | Figure 3: Scene Graph Token Progression During Semantic Search. This graph illustrates the scalability of our approach to large-scale 3D scene graphs. Note the importance of node contraction in maintaining a near-constant token representation of the 3DSG input.
Table 2 data: Office: full graph 6731 tokens, collapsed graph 878 tokens, compression ratio 86.9%. Home: full graph 6598 tokens, collapsed graph 1817 tokens, compression ratio 72.5%.
Table 2: 3D Scene Graph Token Count. Number of tokens required for the full graph vs. the collapsed graph.
An odd failure case in the simple search instructions involved negation, where the agent consistently failed when presented with questions such as "Find me an office that does not have a cabinet" or "Find me a bathroom with no toilet". Other failure cases noted across the complex search instructions included the LLM's failure to conduct simple distance-based and count-based reasoning over graph nodes. While trivial to a human, this does require the LLM agent to reason over multiple nodes simultaneously, where it tends to hallucinate or miscount connected nodes. | 2307.06135#30 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
2307.06187 | 30 | [8] J. Wei, Y. Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, D. Yogatama, M. Bosma, D. Zhou, D. Metzler et al., "Emergent abilities of large language models," arXiv preprint arXiv:2206.07682, 2022. [9] J. Huang, S. S. Gu, L. Hou, Y. Wu, X. Wang, H. Yu, and J. Han, "Large language models can self-improve," arXiv preprint arXiv:2210.11610, 2022.
[10] N. M. do Nascimento and C. J. P. de Lucena, "Fiot: An agent-based framework for self-adaptive and self-organizing applications based on the internet of things," Information Sciences, vol. 378, pp. 161–176, 2017. | 2307.06187#30 | Self-Adaptive Large Language Model (LLM)-Based Multiagent Systems | In autonomic computing, self-adaptation has been proposed as a fundamental
paradigm to manage the complexity of multiagent systems (MASs). This is achieved
by extending a system with support to monitor and adapt itself to achieve
specific concerns of interest. Communication in these systems is key given that
in scenarios involving agent interaction, it enhances cooperation and reduces
coordination challenges by enabling direct, clear information exchange.
However, improving the expressiveness of the interaction communication with
MASs is not without challenges. In this sense, the interplay between
self-adaptive systems and effective communication is crucial for future MAS
advancements. In this paper, we propose the integration of large language
models (LLMs) such as GPT-based technologies into multiagent systems. We anchor
our methodology on the MAPE-K model, which is renowned for its robust support
in monitoring, analyzing, planning, and executing system adaptations in
response to dynamic environments. We also present a practical illustration of
the proposed approach, in which we implement and assess a basic MAS-based
application. The approach significantly advances the state-of-the-art of
self-adaptive systems by proposing a new paradigm for MAS self-adaptation of
autonomous systems based on LLM capabilities. | http://arxiv.org/pdf/2307.06187 | Nathalia Nascimento, Paulo Alencar, Donald Cowan | cs.MA, cs.AI, cs.CL | 6 pages, submitted | null | cs.MA | 20230712 | 20230712 | [
{
"id": "2210.11610"
},
{
"id": "2206.07682"
},
{
"id": "2303.18223"
}
] |
2307.06281 | 30 | 1 "A" may serve as an article in a sentence. Thus we skip this candidate when matching within a sentence.
7
ChatGPT to align the prediction with one of the given options, and subsequently produce the label of the corresponding option.
Step 3. Fallback: Random Assignment. If Step 2 still cannot extract the choice, we label the prediction with a random choice among all valid choices and "X". Additionally, a comment message will be added to denote that ChatGPT failed to parse the model prediction. This step is never encountered in our preliminary feasibility analysis (Sec. 4.3), but we still add it for pipeline integrity.
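A rough sketch of the three-step pipeline described above is shown below. It paraphrases the procedure rather than reproducing the authors' code; ask_chatgpt() is a hypothetical helper that would issue the ChatGPT query built from the template shown next.

import random
from typing import Optional, Sequence

def exact_match(prediction: str, labels: Sequence[str] = ("A", "B", "C", "D")) -> Optional[str]:
    """Step 1: exact matching of option labels such as "C", "C.", "C)", "C,", "C)."."""
    tokens = prediction.split()
    for label in labels:
        candidates = {label, label + ".", label + ")", label + ",", label + ")."}
        if label == "A":
            candidates.discard("A")  # a bare "A" is skipped, since it is often just an article
        if any(tok in candidates for tok in tokens):
            return label
    return None

def ask_chatgpt(question: str, options: Sequence[str], prediction: str) -> str:
    """Step 2 placeholder: send the template below to ChatGPT and return its one-letter reply."""
    raise NotImplementedError

def extract_choice(question: str, options: Sequence[str], prediction: str,
                   labels: Sequence[str] = ("A", "B", "C", "D")) -> str:
    label = exact_match(prediction, labels)                  # Step 1
    if label is None:
        label = ask_chatgpt(question, options, prediction)   # Step 2
    if label not in set(labels) | {"X"}:
        label = random.choice(list(labels) + ["X"])          # Step 3: random fallback
    return label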
ChatGPT-based choice extraction. To utilize ChatGPT as the choice extractor, we query it with the following template, including the question, options and the corresponding VLM's prediction: gpt_query_template = ( | 2307.06281#30 | MMBench: Is Your Multi-modal Model an All-around Player? | Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench. | http://arxiv.org/pdf/2307.06281 | Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin | cs.CV, cs.CL | null | null | cs.CV | 20230712 | 20230813 | [
{
"id": "2302.13971"
},
{
"id": "2306.15195"
},
{
"id": "2305.03726"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "1504.00325"
},
{
"id": "2306.14824"
},
{
"id": "2305.16355"
},
{
"id": "2305.08322"
},
{
"id": "2111.02114"
},
{
"id": "2304.14178"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2304.08485"
}
] |
2307.06290 | 30 | 4.3 ABLATION STUDY
We further conduct ablation experiments to study the influence of every indicator in INSTRUCTMINING. To do this, we first remove one indicator from the current rule and estimate a new rule using the other three indicators, based on the original random experiment results. Then, we use this new rule to select 1,000 data examples with the highest scores. These 1,000 data examples are later used to finetune the base language model, LLAMA-2-7B, for three epochs. We present the ablation study results in Table 5. Accordingly, Rew appears to be the most important indicator among the four. Without Rew as one of the rule indicators, the estimated rule results in an increase of 0.03 in SELF-INSTRUCT inference loss and 0.051 in MT-BENCH inference loss.
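A schematic version of this leave-one-out procedure is sketched below. It assumes, for illustration, that a rule is a simple linear fit of observed inference loss on the indicator values collected from the random finetuning experiments, and that lower predicted loss means higher estimated data quality; the paper's exact estimator may differ.

import numpy as np

INDICATORS = ["Rew", "Und", "Nat", "Coh"]

def fit_rule(X: np.ndarray, loss: np.ndarray) -> np.ndarray:
    """Least-squares fit of loss on indicator values (with intercept)."""
    X1 = np.hstack([np.ones((X.shape[0], 1)), X])
    coef, *_ = np.linalg.lstsq(X1, loss, rcond=None)
    return coef

def select_top_k(pool_X: np.ndarray, coef: np.ndarray, k: int = 1000) -> np.ndarray:
    """Return indices of the k pool examples with the lowest predicted loss."""
    pred = coef[0] + pool_X @ coef[1:]
    return np.argsort(pred)[:k]

def ablate(X: np.ndarray, loss: np.ndarray, pool_X: np.ndarray,
           dropped: str, k: int = 1000) -> np.ndarray:
    """Drop one indicator, refit the rule, and reselect the top-k examples."""
    keep = [i for i, name in enumerate(INDICATORS) if name != dropped]
    coef = fit_rule(X[:, keep], loss)
    return select_top_k(pool_X[:, keep], coef, k)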
# 5 ANALYSIS
5.1 DOUBLE DESCENT IN GENERATIVE LANGUAGE MODELS
[Figure: four panels (self-instruct loss, mt-bench loss, OpenLLM score, MMLU score) plotted against finetuning data size in thousands (0 to 80), each comparing randomly selected data ("Random") against INSTRUCTMINING-selected data ("Select").] | 2307.06290#30 | Instruction Mining: When Data Mining Meets Large Language Model Finetuning | Large language models (LLMs) are initially pretrained for broad capabilities
and then finetuned with instruction-following datasets to improve their
performance in interacting with humans. Despite advances in finetuning, a
standardized guideline for selecting high-quality datasets to optimize this
process remains elusive. In this paper, we first propose InstructMining, an
innovative method designed for automatically selecting premium
instruction-following data for finetuning LLMs. Specifically, InstructMining
utilizes natural language indicators as a measure of data quality, applying
them to evaluate unseen datasets. During experimentation, we discover that
double descent phenomenon exists in large language model finetuning. Based on
this observation, we further leverage BlendSearch to help find the best subset
among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show
that InstructMining-7B achieves state-of-the-art performance on two of the most
popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard. | http://arxiv.org/pdf/2307.06290 | Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun | cs.CL, cs.AI, cs.LG | 22 pages, 7 figures | null | cs.CL | 20230712 | 20231027 | [
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2304.03277"
},
{
"id": "2306.11644"
},
{
"id": "2211.05100"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2109.07958"
},
{
"id": "2009.03300"
},
{
"id": "2212.10560"
}
] |
2307.06135 | 31 | Scalability Analysis: We additionally analyse the scalability of SayPlan during semantic search. Table 2 illustrates the impact of exploiting the hierarchical nature of 3D scene graphs and allowing the LLM to explore the graph from a collapsed initial state. This allows for a reduction of 82.1% in the initial input tokens required to represent the Office environment and a 60.4% reduction for the Home environment. In Figure 3, we illustrate how endowing the LLM with the ability to contract explored nodes which it deems unsuitable for solving the task allows it to maintain near-constant input memory from a token perspective across the entire semantic search process. Note that the initial number of tokens already present represents the input prompt tokens as given in Appendix J. Further ablation studies on the scalability of SayPlan to even larger 3DSGs are provided in Appendix H.
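The contraction mechanism can be illustrated with a toy sketch like the one below (not the SayPlan codebase): the scene graph is kept as nested JSON, rooms start collapsed, and a room that turns out to be irrelevant is contracted again so the serialized prompt stays roughly constant in size. The whitespace-based token count is only a crude proxy for an LLM tokenizer, and the graph contents are invented for the example.

import copy
import json

# Toy hierarchical scene graph: floors contain rooms, rooms contain assets/objects.
FULL_GRAPH = {
    "floor1": {
        "kitchen": {"assets": ["fridge", "bench"], "objects": ["banana"]},
        "office1": {"assets": ["cabinet"], "objects": ["stapler"]},
    }
}

def collapsed_view(graph):
    """Keep only floor/room names; hide assets and objects."""
    return {floor: {room: {} for room in rooms} for floor, rooms in graph.items()}

def expand(view, graph, floor, room):
    view[floor][room] = copy.deepcopy(graph[floor][room])

def contract(view, floor, room):
    view[floor][room] = {}

def approx_tokens(view):
    return len(json.dumps(view).split())  # crude whitespace-token proxy

view = collapsed_view(FULL_GRAPH)
print(approx_tokens(view))            # small, collapsed representation
expand(view, FULL_GRAPH, "floor1", "office1")
print(approx_tokens(view))            # grows after expansion
contract(view, "floor1", "office1")   # irrelevant for, e.g., a "find a banana" task
print(approx_tokens(view))            # back near the collapsed size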
7
# 5.2 Causal Planning | 2307.06135#31 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
2307.06187 | 31 | [11] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., "Language models are few-shot learners," Advances in Neural Information Processing Systems, vol. 33, pp. 1877–1901, 2020.
[12] I. Redbooks and I. B. M. C. I. T. S. Organization, A Practical Guide to the IBM Autonomic Computing Toolkit, ser. IBM redbooks. IBM, International Support Organization, 2004. [Online]. Available: https://books.google.com.au/books?id=XHeoSgAACAAJ
[13] B. Porter, R. Rodrigues Filho, and P. Dean, "A survey of methodology in self-adaptive systems research," in 2020 IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS). IEEE, 2020, pp. 168–177. | 2307.06187#31 | Self-Adaptive Large Language Model (LLM)-Based Multiagent Systems | In autonomic computing, self-adaptation has been proposed as a fundamental
paradigm to manage the complexity of multiagent systems (MASs). This is achieved
by extending a system with support to monitor and adapt itself to achieve
specific concerns of interest. Communication in these systems is key given that
in scenarios involving agent interaction, it enhances cooperation and reduces
coordination challenges by enabling direct, clear information exchange.
However, improving the expressiveness of the interaction communication with
MASs is not without challenges. In this sense, the interplay between
self-adaptive systems and effective communication is crucial for future MAS
advancements. In this paper, we propose the integration of large language
models (LLMs) such as GPT-based technologies into multiagent systems. We anchor
our methodology on the MAPE-K model, which is renowned for its robust support
in monitoring, analyzing, planning, and executing system adaptations in
response to dynamic environments. We also present a practical illustration of
the proposed approach, in which we implement and assess a basic MAS-based
application. The approach significantly advances the state-of-the-art of
self-adaptive systems by proposing a new paradigm for MAS self-adaptation of
autonomous systems based on LLM capabilities. | http://arxiv.org/pdf/2307.06187 | Nathalia Nascimento, Paulo Alencar, Donald Cowan | cs.MA, cs.AI, cs.CL | 6 pages, submitted | null | cs.MA | 20230712 | 20230712 | [
{
"id": "2210.11610"
},
{
"id": "2206.07682"
},
{
"id": "2303.18223"
}
] |
2307.06281 | 31 | "You are an AI assistant to help me matching an answer with several options of a multiple choice question. "
    "You are provided with a question, several options, and an answer, "
    "and you need to find which option is most similar to the answer. "
    "If the meaning of all options are significantly different from the answer, output X. "
    "Your should output a single uppercase character in A, B, C, D (if they are valid options), and X.\n"
    "Example 1:\n"
    "Question: What is the main object in image?\nOptions: A. teddy bear B. rabbit C. cat D. dog\n"
    "Answer: a cute teddy bear\nYour output: A\n"
    "Example 2:\n"
    "Question: What is the main object in image?\nOptions: A. teddy bear B. rabbit C. cat D. dog\n"
    "Answer: Spider\nYour output: X\n"
    "Example 3:\n"
    f"Question: {question}?\nOptions: {options}\nAnswer: {prediction}\nYour output: "
) | 2307.06281#31 | MMBench: Is Your Multi-modal Model an All-around Player? | Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench. | http://arxiv.org/pdf/2307.06281 | Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin | cs.CV, cs.CL | null | null | cs.CV | 20230712 | 20230813 | [
{
"id": "2302.13971"
},
{
"id": "2306.15195"
},
{
"id": "2305.03726"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "1504.00325"
},
{
"id": "2306.14824"
},
{
"id": "2305.16355"
},
{
"id": "2305.08322"
},
{
"id": "2111.02114"
},
{
"id": "2304.14178"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2304.08485"
}
] |
2307.06290 | 31 | Figure 4: Double descent in generative language models. Models are evaluated using four metrics: loss on SELF-INSTRUCT, loss on MT-BENCH, OPENLLM scores, and MMLU scores.
In this section, we present further experimental findings on the OpenOrca dataset. In previous experiments, we found that a language model's performance can be influenced by both finetuning data quality and quantity. When data quantity increases, generative language models' performance is not guaranteed to improve. This phenomenon suggests a balance between data quantity and data
8
Preprint
quality. Results are presented in Figure 4. This reveals some interesting emergent phenomena when finetuning large language models. We detail the observed phenomena below. | 2307.06290#31 | Instruction Mining: When Data Mining Meets Large Language Model Finetuning | Large language models (LLMs) are initially pretrained for broad capabilities
and then finetuned with instruction-following datasets to improve their
performance in interacting with humans. Despite advances in finetuning, a
standardized guideline for selecting high-quality datasets to optimize this
process remains elusive. In this paper, we first propose InstructMining, an
innovative method designed for automatically selecting premium
instruction-following data for finetuning LLMs. Specifically, InstructMining
utilizes natural language indicators as a measure of data quality, applying
them to evaluate unseen datasets. During experimentation, we discover that
double descent phenomenon exists in large language model finetuning. Based on
this observation, we further leverage BlendSearch to help find the best subset
among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show
that InstructMining-7B achieves state-of-the-art performance on two of the most
popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard. | http://arxiv.org/pdf/2307.06290 | Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun | cs.CL, cs.AI, cs.LG | 22 pages, 7 figures | null | cs.CL | 20230712 | 20231027 | [
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2304.03277"
},
{
"id": "2306.11644"
},
{
"id": "2211.05100"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2109.07958"
},
{
"id": "2009.03300"
},
{
"id": "2212.10560"
}
] |
2307.06135 | 32 | # 5.2 Causal Planning
The results for causal planning across simple and long-horizon instructions are summarised in Table 3 (left). We compared SayPlan's performance against two baselines: LLM-As-Planner and LLM+P. All three methods displayed consistent correctness in simple planning tasks at 93%, given that this metric is more a function of the underlying LLM's reasoning capabilities. However, it is interesting to note that in the long-horizon tasks, both the path planner and iterative replanning play an important role in improving this correctness metric by reducing the planning horizon and allowing the LLM to reflect on its previous output. | 2307.06135#32 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
2307.06187 | 32 | [14] A. Farahani, E. Nazemi, G. Cabri, and N. Capodieci, "Enabling autonomic computing support for the jade agent platform," Scalable Computing: Practice and Experience, vol. 18, no. 1, pp. 91–103, 2017. [15] J. Andersson, M. Caporuscio, M. D'Angelo, and A. Napolitano, "Architecting decentralized control in large-scale self-adaptive systems," Computing, pp. 1–34, 2023.
[16] P. Dehraj and A. Sharma, "A review on architecture and models for autonomic software systems," The Journal of Supercomputing, vol. 77, pp. 388–417, 2021.
[17] R. Araújo and R. Holmes, "Lightweight self-adaptive configuration using machine learning," in Proceedings of the 31st Annual International Conference on Computer Science and Software Engineering, 2021, pp. 133–142.
[18] F. Bellifemine, G. Caire, T. Trucco, G. Rimassa, and R. Mungenast, "Jade administrator's guide," TILab (February 2006), 2003. | 2307.06187#32 | Self-Adaptive Large Language Model (LLM)-Based Multiagent Systems | In autonomic computing, self-adaptation has been proposed as a fundamental
paradigm to manage the complexity of multiagent systems (MASs). This is achieved
by extending a system with support to monitor and adapt itself to achieve
specific concerns of interest. Communication in these systems is key given that
in scenarios involving agent interaction, it enhances cooperation and reduces
coordination challenges by enabling direct, clear information exchange.
However, improving the expressiveness of the interaction communication with
MASs is not without challenges. In this sense, the interplay between
self-adaptive systems and effective communication is crucial for future MAS
advancements. In this paper, we propose the integration of large language
models (LLMs) such as GPT-based technologies into multiagent systems. We anchor
our methodology on the MAPE-K model, which is renowned for its robust support
in monitoring, analyzing, planning, and executing system adaptations in
response to dynamic environments. We also present a practical illustration of
the proposed approach, in which we implement and assess a basic MAS-based
application. The approach significantly advances the state-of-the-art of
self-adaptive systems by proposing a new paradigm for MAS self-adaptation of
autonomous systems based on LLM capabilities. | http://arxiv.org/pdf/2307.06187 | Nathalia Nascimento, Paulo Alencar, Donald Cowan | cs.MA, cs.AI, cs.CL | 6 pages, submitted | null | cs.MA | 20230712 | 20230712 | [
{
"id": "2210.11610"
},
{
"id": "2206.07682"
},
{
"id": "2303.18223"
}
] |
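The MAPE-K anchoring described in the abstract above (Monitor, Analyze, Plan, Execute over shared Knowledge, with an LLM supplying the reasoning) can be illustrated with a schematic control loop. This is only a sketch: the `llm`, `read_sensors` and `act` callables, the node fields, and the latency goal are placeholders, not the paper's implementation.

```python
# Schematic MAPE-K loop with an LLM in the Analyze/Plan stages.
# `llm`, `read_sensors`, and `act` are placeholder abstractions for illustration only.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Knowledge:
    history: List[str] = field(default_factory=list)   # shared K in MAPE-K
    goals: str = "keep response latency under 2s"      # illustrative concern

def mape_k_step(read_sensors: Callable[[], Dict],
                act: Callable[[str], None],
                llm: Callable[[str], str],
                k: Knowledge) -> None:
    # Monitor: collect raw observations from the managed system.
    observations = read_sensors()
    # Analyze: let the LLM judge whether the goals are at risk.
    analysis = llm(f"Goals: {k.goals}\nObservations: {observations}\n"
                   "Is an adaptation needed? Answer yes/no and explain briefly.")
    # Plan: only plan if the analysis flags a problem.
    if "yes" in analysis.lower():
        plan = llm(f"Propose one concrete adaptation action for: {analysis}")
        # Execute: apply the plan through the effectors and record it in Knowledge.
        act(plan)
        k.history.append(plan)
```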
2307.06281 | 32 | We then get the prediction's option (e.g. A) from GPT's response. For most questions, GPT-3.5 is capable of returning a single character (e.g., A, B, C) as the answer. For each input, we compare the model's label prediction (after GPT's similarity readout) with the actual ground truth label. If the prediction matches the label, the test sample is considered correct.
# 4.3 ChatGPT as the Judge: A Feasibility Analysis
Table 2: Success rate of each step in our choice extraction.
We first conduct pilot experiments to study the effectiveness of ChatGPT as the judge. To keep the setting simple, for MMBench, we sample a subset with ~1000 samples and use the vanilla single-pass evaluation strategy to evaluate 8 selected VLMs. We also include traditional VQA benchmarks in the study. | 2307.06281#32 | MMBench: Is Your Multi-modal Model an All-around Player? | Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench. | http://arxiv.org/pdf/2307.06281 | Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin | cs.CV, cs.CL | null | null | cs.CV | 20230712 | 20230813 | [
{
"id": "2302.13971"
},
{
"id": "2306.15195"
},
{
"id": "2305.03726"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "1504.00325"
},
{
"id": "2306.14824"
},
{
"id": "2305.16355"
},
{
"id": "2305.08322"
},
{
"id": "2111.02114"
},
{
"id": "2304.14178"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2304.08485"
}
] |
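A rough sketch of the two-step choice extraction described in the chunk above: exact matching first, then an LLM "similarity readout" fallback. The regular expressions and the `chat` helper are illustrative assumptions rather than the benchmark's actual code.

```python
# Two-step choice extraction sketch: cheap matching first, LLM readout as fallback.
import re
from typing import Callable, List, Optional

def step1_exact_match(prediction: str, choices: List[str]) -> Optional[str]:
    letters = "ABCD"[: len(choices)]
    # Look for a standalone option letter near the start, e.g. "A", "(A)", "Answer: A".
    m = re.search(rf"\b([{letters}])\b", prediction.strip()[:20])
    if m:
        return m.group(1)
    # Otherwise look for the verbatim text of exactly one choice in the free-form answer.
    hits = [letters[i] for i, c in enumerate(choices) if c.lower() in prediction.lower()]
    return hits[0] if len(hits) == 1 else None

def step2_llm_readout(prediction: str, choices: List[str],
                      chat: Callable[[str], str]) -> str:
    # Ask an LLM which pre-defined option the free-form prediction is closest to.
    options = "\n".join(f"{l}. {c}" for l, c in zip("ABCD", choices))
    reply = chat(f"Answer: {prediction}\nOptions:\n{options}\n"
                 "Which option is closest in meaning? Reply with a single letter.")
    return reply.strip().upper()[:1]
```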
2307.06290 | 32 | quality. Results are presented in Figure 4. This reveals some interesting emergent phenomena when finetuning large language models. We detail the observed phenomena below.
Phenomenon 1 Non-monotonicity exists in language model performance. As we increase the training data size, language model performance first improves and then degrades; when the data size increases to a certain level, performance improves again. Based on Figure 4, we observe that performance first improves as the training data size grows. Then, after the data size grows to around 10,000, the loss begins to increase, meaning that language model performance worsens. Finally, as the data size continues to grow, the language model's performance improves again. This is similar to the double descent phenomenon (Nakkiran et al., 2021), in which non-monotonicity appears as the number of training samples varies. In our experiment, we observe that this phenomenon exists not only in vanilla language model training but also in large generative language model finetuning. | 2307.06290#32 | Instruction Mining: When Data Mining Meets Large Language Model Finetuning | Large language models (LLMs) are initially pretrained for broad capabilities
and then finetuned with instruction-following datasets to improve their
performance in interacting with humans. Despite advances in finetuning, a
standardized guideline for selecting high-quality datasets to optimize this
process remains elusive. In this paper, we first propose InstructMining, an
innovative method designed for automatically selecting premium
instruction-following data for finetuning LLMs. Specifically, InstructMining
utilizes natural language indicators as a measure of data quality, applying
them to evaluate unseen datasets. During experimentation, we discover that
double descent phenomenon exists in large language model finetuning. Based on
this observation, we further leverage BlendSearch to help find the best subset
among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show
that InstructMining-7B achieves state-of-the-art performance on two of the most
popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard. | http://arxiv.org/pdf/2307.06290 | Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun | cs.CL, cs.AI, cs.LG | 22 pages, 7 figures | null | cs.CL | 20230712 | 20231027 | [
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2304.03277"
},
{
"id": "2306.11644"
},
{
"id": "2211.05100"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2109.07958"
},
{
"id": "2009.03300"
},
{
"id": "2212.10560"
}
] |
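One way to reproduce the kind of data-size sweep behind the double-descent observation above is sketched below. `finetune_and_eval` stands in for a full finetuning-plus-evaluation job, and the size grid is illustrative rather than the exact grid used in the paper.

```python
# Schematic sweep for observing a non-monotonic (double-descent-like) curve:
# finetune on nested subsets of growing size and track evaluation loss.
from typing import Callable, List, Sequence, Tuple

def data_size_sweep(dataset: Sequence,
                    sizes: List[int],
                    finetune_and_eval: Callable[[Sequence], float]
                    ) -> List[Tuple[int, float]]:
    curve = []
    for n in sizes:
        subset = dataset[:n]              # nested subsets keep the comparison consistent
        loss = finetune_and_eval(subset)  # e.g. loss on a held-out instruction set
        curve.append((n, loss))
    return curve

# Example log-spaced grid around the ~10,000-sample region where the loss rises.
SIZES = [1_000, 2_000, 5_000, 10_000, 20_000, 40_000, 80_000]
```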
2307.06135 | 33 | The results illustrate that the key to ensuring the task plan's executability was iterative replanning. Both LLM-As-Planner and LLM+P exhibited poor executability, whereas SayPlan achieved near-perfect executability as a result of iterative replanning, which ensured that the generated plans were grounded to adhere to the constraints and predicates imposed by the environment. Detailed task plans and errors encountered are provided in Appendix G. We summarise these errors in Table 3 (right), which shows that plans generated with LLM+P and LLM-As-Planner entailed various types of errors limiting their executability. LLM+P mitigated navigational path planning errors as a result of the classical path planner; however, it still suffered from errors pertaining to the manipulation of the environment: missing actions or incorrect actions which violate environment predicates. SayPlan mitigated these errors via iterative replanning; however, in 6.67% of tasks, it failed to correct for some hallucinated nodes. While we believe these errors could eventually be corrected via iterative replanning, we limited the number of replanning steps to 5 throughout all experiments. We provide an illustration of the real-world execution of a generated plan using SayPlan on a mobile manipulator robot coupled with a vision-guided motion controller [44, 45] in Appendix I.
# 6 Limitations | 2307.06135#33 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
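The iterative replanning behaviour credited above for near-perfect executability can be sketched as a feedback loop capped at five attempts, as in the experiments. `llm_plan` and `simulate` are placeholder callables, not SayPlan's actual interfaces.

```python
# Minimal sketch of an iterative replanning loop driven by simulator feedback.
from typing import Callable, List, Optional, Tuple

MAX_REPLANS = 5  # cap used in the experiments described above

def plan_with_feedback(instruction: str,
                       subgraph: dict,
                       llm_plan: Callable[[str, dict, str], List[str]],
                       simulate: Callable[[List[str], dict], Tuple[bool, str]]
                       ) -> Optional[List[str]]:
    feedback = ""
    for _ in range(MAX_REPLANS):
        plan = llm_plan(instruction, subgraph, feedback)  # propose (or revise) a plan
        ok, feedback = simulate(plan, subgraph)           # check predicates/constraints
        if ok:
            return plan                                   # executable, grounded plan
    return None                                           # give up after the cap
```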
2307.06281 | 33 | Model Name          Step-1    Step-2
LLaMA-Adapter [13]      1.0%    100.0%
OpenFlamingo [3]       98.6%    100.0%
VisualGLM [9]          14.9%    100.0%
MiniGPT-4 [46]         71.6%    100.0%
LLaVA [27]              9.9%    100.0%
Otter-I [23]          100.0%    100.0%
InstructBLIP [8]       91.2%    100.0%
mPLUG-Owl [40]         42.6%    100.0%
Instruction following capabilities of different VLMs vary a lot. ChatGPT-involved choice extraction plays a vital role in MMBench evaluation, especially for VLMs with poor instruction following capabilities. In Table 2, we demonstrate the success rate of step 1 and step 2 of our evaluation strategy. Step 1 success rate (matching choices with VLM predictions) is directly related to the VLM's instruction-following capability. Table 2 shows that the step-1 success rates of different VLMs vary a lot, covering a wide range from 1.0% to 100.0%. | 2307.06281#33 | MMBench: Is Your Multi-modal Model an All-around Player? | Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench. | http://arxiv.org/pdf/2307.06281 | Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin | cs.CV, cs.CL | null | null | cs.CV | 20230712 | 20230813 | [
{
"id": "2302.13971"
},
{
"id": "2306.15195"
},
{
"id": "2305.03726"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "1504.00325"
},
{
"id": "2306.14824"
},
{
"id": "2305.16355"
},
{
"id": "2305.08322"
},
{
"id": "2111.02114"
},
{
"id": "2304.14178"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2304.08485"
}
] |
2307.06290 | 33 | Phenomenon 2 A balance point exists between randomly selected and quality-selected data. As data size grows, data quality becomes a less important factor for model performance. From Figure 4, we find that once the data size grows past a certain point, the performance curve of the selected data always intersects with that of the random data. Moreover, the distance between the two curves decreases as the data size increases. This phenomenon indicates that a data quality measure can help improve model performance at first; however, data quality becomes less important once the data size grows to a certain level.
5.2 ROBUSTNESS | 2307.06290#33 | Instruction Mining: When Data Mining Meets Large Language Model Finetuning | Large language models (LLMs) are initially pretrained for broad capabilities
and then finetuned with instruction-following datasets to improve their
performance in interacting with humans. Despite advances in finetuning, a
standardized guideline for selecting high-quality datasets to optimize this
process remains elusive. In this paper, we first propose InstructMining, an
innovative method designed for automatically selecting premium
instruction-following data for finetuning LLMs. Specifically, InstructMining
utilizes natural language indicators as a measure of data quality, applying
them to evaluate unseen datasets. During experimentation, we discover that
double descent phenomenon exists in large language model finetuning. Based on
this observation, we further leverage BlendSearch to help find the best subset
among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show
that InstructMining-7B achieves state-of-the-art performance on two of the most
popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard. | http://arxiv.org/pdf/2307.06290 | Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun | cs.CL, cs.AI, cs.LG | 22 pages, 7 figures | null | cs.CL | 20230712 | 20231027 | [
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2304.03277"
},
{
"id": "2306.11644"
},
{
"id": "2211.05100"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2109.07958"
},
{
"id": "2009.03300"
},
{
"id": "2212.10560"
}
] |
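The "balance point" described in Phenomenon 2 above can be located programmatically once the two loss curves (quality-selected vs. randomly selected) have been measured, for example with the sweep sketched earlier. This helper is a hedged sketch with an assumed curve format.

```python
# Sketch: find the first data size at which quality-selected data stops beating
# randomly selected data, given two (size, loss) curves measured on the same grid.
from typing import List, Optional, Tuple

def first_crossover(selected: List[Tuple[int, float]],
                    random_: List[Tuple[int, float]]) -> Optional[int]:
    """Return the first data size where the selected-data loss is no longer lower."""
    for (n, loss_sel), (_, loss_rand) in zip(selected, random_):
        if loss_sel >= loss_rand:
            return n
    return None  # selected data stayed ahead over the whole sweep
```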
2307.06135 | 34 | SayPlan is notably constrained by the limitations inherent in current large language models (LLMs), including biases and inaccuracies, affecting the validity of its generated plans. More specifically, SayPlan is limited by the graph-based reasoning capabilities of the underlying LLM which fails at simple distance-based reasoning, node count-based reasoning and node negation. Future work could explore fine-tuning these models for these specific tasks or alternatively incorporate existing and more complex graph reasoning tools [46] to facilitate decision-making. Secondly, SayPlan's current framework is constrained by the need for a pre-built 3D scene graph and assumes that objects remain static post-map generation, significantly restricting its adaptability to dynamic real-world environments. Future work could explore how online scene graph SLAM systems [15] could be integrated within the SayPlan framework to account for this. Additionally, the incorporation of open-vocabulary representations within the scene graph could yield a general scene representation as opposed to solely textual node descriptions. Lastly, a potential limitation of the current system lies in the scene graph simulator and its ability to capture the | 2307.06135#34 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
2307.06281 | 34 | However, with the ChatGPT choice extractor equipped, the step-2 success rates of all VLMs reach nearly 100%, which enables a fair comparison of different VLMs on MMBench. Another point worth noting is that instruction-following capability and overall multi-modality modeling capability are not necessarily correlated. OpenFlamingo [3] demonstrates the best instruction following capability among all VLMs, while also achieving one of the worst performances on MMBench (Table 5).
# Human vs. ChatGPT: alignment in choice extraction.
For VLM predictions that cannot be parsed with exact matching, we adopt ChatGPT as the choice extractor. To validate its efficacy, we sample a subset of MMBench, which contains 103 questions and 824 (103 × 8) question-answer pairs. We keep only the QA pairs that cannot be parsed by the evaluation step 1, which yields 376 data samples. With the help of 6 volunteers, we perform manual choice extraction on these data samples [2]. | 2307.06281#34 | MMBench: Is Your Multi-modal Model an All-around Player? | Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench. | http://arxiv.org/pdf/2307.06281 | Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin | cs.CV, cs.CL | null | null | cs.CV | 20230712 | 20230813 | [
{
"id": "2302.13971"
},
{
"id": "2306.15195"
},
{
"id": "2305.03726"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "1504.00325"
},
{
"id": "2306.14824"
},
{
"id": "2305.16355"
},
{
"id": "2305.08322"
},
{
"id": "2111.02114"
},
{
"id": "2304.14178"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2304.08485"
}
] |
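The human-vs-ChatGPT comparison set up in the chunk above boils down to an agreement rate over the manually and automatically extracted choices. The field names in this helper are assumptions; the study later reports roughly 87% agreement for ChatGPT (GPT-3.5).

```python
# Tiny helper for the alignment check: the fraction of samples where the
# human-extracted choice and the LLM-extracted choice are identical.
from typing import Dict, List

def alignment_rate(samples: List[Dict[str, str]]) -> float:
    agree = sum(
        1 for s in samples
        if s["human_choice"].strip().upper() == s["llm_choice"].strip().upper()
    )
    return agree / max(len(samples), 1)

# Example (hypothetical sample records):
# alignment_rate([{"human_choice": "B", "llm_choice": "B"},
#                 {"human_choice": "A", "llm_choice": "C"}])  -> 0.5
```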
2307.06290 | 34 | 5.2 ROBUSTNESS
To further explore the effectiveness of INSTRUCTMINING, we evaluate it across three different finetuning settings: (1) Different base models. We change the original base model from LLAMA-2-7B to LLAMA-1-7B to test whether our method is scalable to other models. (2) Different model sizes. We change the original 7B model size to 13B to test whether our method is scalable to other model sizes. (3) Parameter-efficient settings. LORA (Hu et al., 2021), a parameter-efficient method, is widely used when finetuning a large language model to help save GPU memory. We also test our method with LORA settings to see whether INSTRUCTMINING is scalable to parameter-efficient finetuning. Results are presented in Table 6. As the data shows, the INSTRUCTMINING rule can be applied to various base models, model sizes and parameter-efficient settings.
Base Model   Model Size   LoRA   Sampling Method   Loss(SELF-INSTRUCT)   Loss(MT-BENCH)
LLAMA-2      13B          No     Selected          0.8748                0.6531
LLAMA-2      13B          No     Random            0.8983                0.6589
LLAMA-1      7B           No     Selected          1.013                 0.798
LLAMA-1      7B           No     Random            1.056                 0.844
LLAMA-2      7B           Yes    Selected          1.0698                0.8624
LLAMA-2      7B           Yes    Random            1.0700                0.8631 | 2307.06290#34 | Instruction Mining: When Data Mining Meets Large Language Model Finetuning | Large language models (LLMs) are initially pretrained for broad capabilities
and then finetuned with instruction-following datasets to improve their
performance in interacting with humans. Despite advances in finetuning, a
standardized guideline for selecting high-quality datasets to optimize this
process remains elusive. In this paper, we first propose InstructMining, an
innovative method designed for automatically selecting premium
instruction-following data for finetuning LLMs. Specifically, InstructMining
utilizes natural language indicators as a measure of data quality, applying
them to evaluate unseen datasets. During experimentation, we discover that
double descent phenomenon exists in large language model finetuning. Based on
this observation, we further leverage BlendSearch to help find the best subset
among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show
that InstructMining-7B achieves state-of-the-art performance on two of the most
popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard. | http://arxiv.org/pdf/2307.06290 | Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun | cs.CL, cs.AI, cs.LG | 22 pages, 7 figures | null | cs.CL | 20230712 | 20231027 | [
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2304.03277"
},
{
"id": "2306.11644"
},
{
"id": "2211.05100"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2109.07958"
},
{
"id": "2009.03300"
},
{
"id": "2212.10560"
}
] |
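One common way to set up the parameter-efficient (LoRA) variant of the robustness test described above, assuming the Hugging Face `transformers` and `peft` libraries. The hyperparameters and model identifier are illustrative, not the values used in the paper.

```python
# LoRA setup sketch: wrap a causal LM so that only low-rank adapter weights train.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # gated checkpoint
lora_cfg = LoraConfig(
    r=8,                                  # low-rank update dimension (illustrative)
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()        # only the LoRA adapters are trainable
```

Because only the adapter weights receive gradients, this setting keeps GPU memory usage low, which is what makes the LoRA robustness run cheap compared with full finetuning.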
2307.06135 | 35 | representation as opposed to solely textual node descriptions. Lastly, a potential limitation of the current system lies in the scene graph simulator and its ability to capture the various planning failures within the environment. While this works well in the cases presented in this paper, for more complex tasks involving a diverse set of predicates and affordances, the incorporation of relevant feedback messages for each instance may become infeasible and forms an important avenue for future work in this area. | 2307.06135#35 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
2307.06281 | 35 | In Figure 5, we report the alignment rate (extracted choices are exactly the same) between ChatGPT and Human. Specifically, ChatGPT (GPT-3.5) achieves an 87.0% alignment rate, while the more powerful GPT-4 achieves a slightly better 87.2%. We further conduct an ablation study to examine the effect of using various LLMs as the choice extractor. GPT-4 and ChatGPT take the lead among all LLMs. Claude achieves a very close alignment rate (86.4%) compared to ChatGPT. Existing
[2] The human annotations will be released.
| 2307.06281#35 | MMBench: Is Your Multi-modal Model an All-around Player? | Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench. | http://arxiv.org/pdf/2307.06281 | Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin | cs.CV, cs.CL | null | null | cs.CV | 20230712 | 20230813 | [
{
"id": "2302.13971"
},
{
"id": "2306.15195"
},
{
"id": "2305.03726"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "1504.00325"
},
{
"id": "2306.14824"
},
{
"id": "2305.16355"
},
{
"id": "2305.08322"
},
{
"id": "2111.02114"
},
{
"id": "2304.14178"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2304.08485"
}
] |
2307.06290 | 35 | Table 6: Robustness test result.
# 6 RELATED WORK
Instruction tuning. Recent studies have explored instruction tuning as a method for fine-tuning LLMs, enhancing their ability to generalize to unseen instructions (Wei et al., 2021). Reinforcement learning from human feedback (RLHF) is a popular method that aligns language models with human intent (Ouyang et al., 2022). To further improve instruction tuning, some work chose to increase the size of the data (Honovich et al., 2022; Wang et al., 2022a). Besides, Zhou et al. (2023) demonstrated that utilizing a smaller volume of high-quality instruction data can still produce effective models.
Instruction evaluation. The field has experienced growth with the publication of numerous instruction datasets (Taori et al., 2023; Köpf et al., 2023; Honovich et al., 2022). Chung et al. (2022) first combined multiple datasets to augment both the quantity and diversity of instruction data, achieving notable performance gains. Recent works suggest that enhancing instruction diversity can also significantly improve instruction tuning performance (Iyer et al., 2023; Wang et al., 2023; 2022b; Longpre et al., 2023). Meanwhile, Gunasekar et al. (2023) have demonstrated that an increased
| 2307.06290#35 | Instruction Mining: When Data Mining Meets Large Language Model Finetuning | Large language models (LLMs) are initially pretrained for broad capabilities
and then finetuned with instruction-following datasets to improve their
performance in interacting with humans. Despite advances in finetuning, a
standardized guideline for selecting high-quality datasets to optimize this
process remains elusive. In this paper, we first propose InstructMining, an
innovative method designed for automatically selecting premium
instruction-following data for finetuning LLMs. Specifically, InstructMining
utilizes natural language indicators as a measure of data quality, applying
them to evaluate unseen datasets. During experimentation, we discover that
double descent phenomenon exists in large language model finetuning. Based on
this observation, we further leverage BlendSearch to help find the best subset
among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show
that InstructMining-7B achieves state-of-the-art performance on two of the most
popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard. | http://arxiv.org/pdf/2307.06290 | Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun | cs.CL, cs.AI, cs.LG | 22 pages, 7 figures | null | cs.CL | 20230712 | 20231027 | [
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2304.03277"
},
{
"id": "2306.11644"
},
{
"id": "2211.05100"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2109.07958"
},
{
"id": "2009.03300"
},
{
"id": "2212.10560"
}
] |
2307.06135 | 36 | # 7 Conclusion
SayPlan is a natural language-driven planning framework for robotics that integrates hierarchical 3D scene graphs and LLMs to plan across large-scale environments spanning multiple floors and rooms. We ensure the scalability of our approach by exploiting the hierarchical nature of 3D scene graphs and the semantic reasoning capabilities of LLMs to enable the agent to explore the scene graph from the highest level within the hierarchy, resulting in a significant reduction in the initial tokens required to capture larger environments. Once explored, the LLM generates task plans for a mobile manipulator robot, and a scene graph simulator ensures that the plan is feasible and grounded to the environment via iterative replanning. The framework surpasses existing techniques in producing correct, executable plans, which a robot can then follow. Finally, we successfully translate validated plans to a real-world mobile manipulator agent which operates across multiple rooms, assets and objects in a large office environment. SayPlan represents a step forward for general-purpose service robotics that can operate in our homes, hospitals and workplaces, laying the groundwork for future research in this field.
# Acknowledgments | 2307.06135#36 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
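The collapsed-graph exploration summarised in the conclusion above (start from the floor/room levels, expand only task-relevant rooms, and remember what was already expanded) can be sketched with simple bookkeeping. The node schema and helper names are assumptions for illustration, not SayPlan's implementation.

```python
# Illustrative bookkeeping for a collapsed 3D scene graph exploration.
from typing import Dict, List, Set

def collapsed_view(graph: Dict[str, dict], expanded: Set[str]) -> Dict[str, dict]:
    """Return only floor/room nodes plus the children of explicitly expanded rooms,
    keeping the token footprint of the LLM prompt small."""
    view = {}
    for name, node in graph.items():
        if node["level"] in ("floor", "room"):
            view[name] = {"level": node["level"], "edges": node["edges"]}
        elif node.get("parent") in expanded:      # assets/objects of expanded rooms
            view[name] = node
    return view

def expand(expanded: Set[str], room: str, memory: List[str]) -> None:
    """Mark a room as expanded and log it so a later contraction is not re-expanded."""
    expanded.add(room)
    memory.append(room)
```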
2307.06281 | 36 | [Figure: ambiguous samples in choice extraction, "Human correct match, ChatGPT wrong match" panel. Example 1: a multiple-choice question about what makes Quick Stop Groceries stand out, with a free-form LLaVA-7B answer that the human annotator matches to choice B while GPT matches it to A. Example 2: a question about the relationship between the people in the image (couple / professional / friends / commercial), with a LLaMA-Adapter answer continued in the next chunk.] | 2307.06281#36 | MMBench: Is Your Multi-modal Model an All-around Player? | Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench. | http://arxiv.org/pdf/2307.06281 | Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin | cs.CV, cs.CL | null | null | cs.CV | 20230712 | 20230813 | [
{
"id": "2302.13971"
},
{
"id": "2306.15195"
},
{
"id": "2305.03726"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "1504.00325"
},
{
"id": "2306.14824"
},
{
"id": "2305.16355"
},
{
"id": "2305.08322"
},
{
"id": "2111.02114"
},
{
"id": "2304.14178"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2304.08485"
}
] |
2307.06290 | 36 | proportion of high-quality data can yield enhanced performance. Other works have focused on estimating the quality of prompts: Chen et al. (2023) use prompting of an LLM API as an auto-grader of data quality, and Gonen et al. (2022) use perplexity for prompt selection.
# 7 CONCLUSION
In this paper, we propose a high-quality example selection method. Experiments have been conducted to estimate this rule's parameter and prove that our evaluation rule is valid and scalable to other finetuning settings. Besides, we present our observation of the double descent phenomenon in language model finetuning. Based on this finding, we further applied BLENDSEARCH to search for the best subset. Results show that the INSTRUCTMINING rule is valid and scalable.
# REFERENCES
Åke Björck. Least squares methods. Handbook of numerical analysis, 1:465–652, 1990.
Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, and Hongxia Jin. Alpagasus: Training a better alpaca with fewer data, 2023. | 2307.06290#36 | Instruction Mining: When Data Mining Meets Large Language Model Finetuning | Large language models (LLMs) are initially pretrained for broad capabilities
and then finetuned with instruction-following datasets to improve their
performance in interacting with humans. Despite advances in finetuning, a
standardized guideline for selecting high-quality datasets to optimize this
process remains elusive. In this paper, we first propose InstructMining, an
innovative method designed for automatically selecting premium
instruction-following data for finetuning LLMs. Specifically, InstructMining
utilizes natural language indicators as a measure of data quality, applying
them to evaluate unseen datasets. During experimentation, we discover that
double descent phenomenon exists in large language model finetuning. Based on
this observation, we further leverage BlendSearch to help find the best subset
among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show
that InstructMining-7B achieves state-of-the-art performance on two of the most
popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard. | http://arxiv.org/pdf/2307.06290 | Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun | cs.CL, cs.AI, cs.LG | 22 pages, 7 figures | null | cs.CL | 20230712 | 20231027 | [
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2304.03277"
},
{
"id": "2306.11644"
},
{
"id": "2211.05100"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2109.07958"
},
{
"id": "2009.03300"
},
{
"id": "2212.10560"
}
] |
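The rule-fitting step mentioned in the conclusion above (estimating the parameters of a quality rule defined over natural-language indicators) can be illustrated with an ordinary least-squares fit. The indicator matrix and target are placeholders; this is a hedged sketch, not the exact InstructMining estimator.

```python
# Fit a linear quality rule on per-dataset indicator values via least squares,
# then use it to score/rank new examples (lower predicted loss = higher quality).
import numpy as np

def fit_quality_rule(indicators: np.ndarray, eval_loss: np.ndarray) -> np.ndarray:
    """indicators: (n_datasets, n_indicators); eval_loss: (n_datasets,).
    Returns weights (with a bias term) such that indicators @ w approximates the loss."""
    X = np.hstack([indicators, np.ones((len(indicators), 1))])  # append bias column
    w, *_ = np.linalg.lstsq(X, eval_loss, rcond=None)
    return w

def score_examples(indicators: np.ndarray, w: np.ndarray) -> np.ndarray:
    X = np.hstack([indicators, np.ones((len(indicators), 1))])
    return X @ w  # predicted loss per example/subset; sort ascending to select data
```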
2307.06135 | 37 | # Acknowledgments
The authors would like to thank Ben Burgess-Limerick for assistance with the robot hardware setup, Nishant Rana for creating the illustrations and Norman Di Palo and Michael Milford for insightful discussions and feedback towards this manuscript. The authors also acknowledge the ongoing support from the QUT Centre for Robotics. This work was partially supported by the Australian Government through the Australian Research Council's Discovery Projects funding scheme (Project DP220102398) and by an Amazon Research Award to Niko Sünderhauf.
# References
[1] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. L. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, J. Schulman, J. Hilton, F. Kelton, L. E. Miller, M. Simens, A. Askell, P. Welinder, P. F. Christiano, J. Leike, and R. J. Lowe. Training language models to follow instructions with human feedback. ArXiv, abs/2203.02155, 2022. | 2307.06135#37 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
2307.06281 | 37 | [Figure, continued: the LLaMA-Adapter answer describes the people as a couple and is matched to B by the human annotator but to A by GPT. "GPT correct match, Human wrong match" panel: questions about the function of a demonstrated object, about what a map shows regarding the Silk Road, and about whether Daucus carota has cells with a nucleus, with free-form answers from LLaVA-7B, mPLUG-Owl-7B and VisualGLM-6B.] | 2307.06281#37 | MMBench: Is Your Multi-modal Model an All-around Player? | Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench. | http://arxiv.org/pdf/2307.06281 | Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin | cs.CV, cs.CL | null | null | cs.CV | 20230712 | 20230813 | [
{
"id": "2302.13971"
},
{
"id": "2306.15195"
},
{
"id": "2305.03726"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "1504.00325"
},
{
"id": "2306.14824"
},
{
"id": "2305.16355"
},
{
"id": "2305.08322"
},
{
"id": "2111.02114"
},
{
"id": "2304.14178"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2304.08485"
}
] |
2307.06290 | 37 | Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022.
Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457, 2018.
Wei Dong, Charikar Moses, and Kai Li. Efficient k-nearest neighbor graph construction for generic similarity measures. In Proceedings of the 20th international conference on World wide web, pp. 577–586, 2011. | 2307.06290#37 | Instruction Mining: When Data Mining Meets Large Language Model Finetuning | Large language models (LLMs) are initially pretrained for broad capabilities
and then finetuned with instruction-following datasets to improve their
performance in interacting with humans. Despite advances in finetuning, a
standardized guideline for selecting high-quality datasets to optimize this
process remains elusive. In this paper, we first propose InstructMining, an
innovative method designed for automatically selecting premium
instruction-following data for finetuning LLMs. Specifically, InstructMining
utilizes natural language indicators as a measure of data quality, applying
them to evaluate unseen datasets. During experimentation, we discover that
double descent phenomenon exists in large language model finetuning. Based on
this observation, we further leverage BlendSearch to help find the best subset
among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show
that InstructMining-7B achieves state-of-the-art performance on two of the most
popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard. | http://arxiv.org/pdf/2307.06290 | Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun | cs.CL, cs.AI, cs.LG | 22 pages, 7 figures | null | cs.CL | 20230712 | 20231027 | [
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2304.03277"
},
{
"id": "2306.11644"
},
{
"id": "2211.05100"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2109.07958"
},
{
"id": "2009.03300"
},
{
"id": "2212.10560"
}
] |
2307.06135 | 38 | [2] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc., 2020.
[3] OpenAI. Gpt-4 technical report. ArXiv, abs/2303.08774, 2023. | 2307.06135#38 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
2307.06281 | 38 | [Figure 6 excerpt, garbled OCR: Case-II data samples showing free-form model answers (e.g., a Silk Road trade-route description and a VisualGLM-6B answer about a brown teddy bear) alongside the Human and GPT choice matches.] | 2307.06281#38 | MMBench: Is Your Multi-modal Model an All-around Player? | Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench. | http://arxiv.org/pdf/2307.06281 | Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin | cs.CV, cs.CL | null | null | cs.CV | 20230712 | 20230813 | [
{
"id": "2302.13971"
},
{
"id": "2306.15195"
},
{
"id": "2305.03726"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "1504.00325"
},
{
"id": "2306.14824"
},
{
"id": "2305.16355"
},
{
"id": "2305.08322"
},
{
"id": "2111.02114"
},
{
"id": "2304.14178"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2304.08485"
}
] |
2307.06290 | 38 | Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation, September 2021. URL https://doi.org/10.5281/zenodo.5371628.
Hila Gonen, Srini Iyer, Terra Blevins, Noah A. Smith, and Luke Zettlemoyer. Demystifying prompts in language models via perplexity estimation, 2022.
Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, et al. Textbooks are all you need. arXiv preprint arXiv:2306.11644, 2023.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020. | 2307.06290#38 | Instruction Mining: When Data Mining Meets Large Language Model Finetuning | Large language models (LLMs) are initially pretrained for broad capabilities
and then finetuned with instruction-following datasets to improve their
performance in interacting with humans. Despite advances in finetuning, a
standardized guideline for selecting high-quality datasets to optimize this
process remains elusive. In this paper, we first propose InstructMining, an
innovative method designed for automatically selecting premium
instruction-following data for finetuning LLMs. Specifically, InstructMining
utilizes natural language indicators as a measure of data quality, applying
them to evaluate unseen datasets. During experimentation, we discover that
double descent phenomenon exists in large language model finetuning. Based on
this observation, we further leverage BlendSearch to help find the best subset
among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show
that InstructMining-7B achieves state-of-the-art performance on two of the most
popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard. | http://arxiv.org/pdf/2307.06290 | Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun | cs.CL, cs.AI, cs.LG | 22 pages, 7 figures | null | cs.CL | 20230712 | 20231027 | [
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2304.03277"
},
{
"id": "2306.11644"
},
{
"id": "2211.05100"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2109.07958"
},
{
"id": "2009.03300"
},
{
"id": "2212.10560"
}
] |
2307.06135 | 39 | [3] OpenAI. Gpt-4 technical report. ArXiv, abs/2303.08774, 2023.
[4] A. Brohan, Y. Chebotar, C. Finn, K. Hausman, A. Herzog, D. Ho, J. Ibarz, A. Irpan, E. Jang, R. Julian, et al. Do As I Can, Not As I Say: Grounding language in robotic affordances. In Conference on Robot Learning, pages 287–318. PMLR, 2023.
[5] N. Wake, A. Kanehira, K. Sasabuchi, J. Takamatsu, and K. Ikeuchi. Chatgpt empowered long-step robot control in various environments: A case application, 2023. | 2307.06135#39 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
2307.06281 | 39 | Figure 6: Visualization of Case-II data samples.
open-source LLMs adapted from LLaMA [13] and GLM [9] achieve poor performance on the choice matching task. Further scaling the architecture (e.g. from Vicuna-7B to Vicuna-33B) only leads to limited improvements. We adopt ChatGPT as the choice extractor in our evaluation for a good performance-cost trade-off.
Mis-aligned cases analysis. Due to the newly introduced pseudo choice 'X', sometimes humans and LLMs can make different decisions when doing choice extraction due to different matching thresholds. For example, agent A may match the prediction P with a given choice C since C is the most similar choice to P, while agent B can output choice X since he / she thinks P is not similar enough to any choice. Based on that observation, we divide the 50 ChatGPT mis-aligned cases into two categories:
Case I. Human or ChatGPT fails to match the prediction with given choices and outputs an 'X'. 70% of misaligned samples belong to that case. | 2307.06281#39 | MMBench: Is Your Multi-modal Model an All-around Player? | Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench. | http://arxiv.org/pdf/2307.06281 | Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin | cs.CV, cs.CL | null | null | cs.CV | 20230712 | 20230813 | [
{
"id": "2302.13971"
},
{
"id": "2306.15195"
},
{
"id": "2305.03726"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "1504.00325"
},
{
"id": "2306.14824"
},
{
"id": "2305.16355"
},
{
"id": "2305.08322"
},
{
"id": "2111.02114"
},
{
"id": "2304.14178"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2304.08485"
}
] |
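The 2307.06281#39 chunk above describes using ChatGPT as a choice extractor that maps a free-form prediction onto one of the given options, or onto the pseudo choice 'X' when nothing is similar enough. The sketch below only illustrates that idea: `llm` is a hypothetical stand-in for whatever chat-completion client is used, option labels are assumed to be upper-case letters, and the prompt wording is an assumption rather than the paper's exact prompt.

```python
# Sketch of LLM-based choice extraction with a pseudo "X" option.
# `llm` is a stand-in callable (prompt -> reply text); the prompt wording is
# an assumption, not the exact prompt used in the paper.
from typing import Callable, Dict


def extract_choice(prediction: str, choices: Dict[str, str],
                   llm: Callable[[str], str]) -> str:
    """Map a free-form prediction onto an option label ('A', 'B', ...) or 'X'."""
    option_block = "\n".join(f"{label}. {text}" for label, text in choices.items())
    prompt = (
        "Match the candidate answer to one of the options.\n"
        f"Options:\n{option_block}\n"
        f"Candidate answer: {prediction}\n"
        "Reply with a single option label, or 'X' if no option is a close match."
    )
    reply = llm(prompt).strip().upper()
    valid = set(choices) | {"X"}
    # Take the first token of the reply as the label; anything unrecognized becomes 'X'.
    label = reply.split()[0].rstrip(".") if reply else "X"
    return label if label in valid else "X"
```

Keeping 'X' as an explicit output is what makes the Case I / Case II split in the following chunk possible.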
2307.06290 | 39 | Or Honovich, Thomas Scialom, Omer Levy, and Timo Schick. Unnatural instructions: Tuning language models with (almost) no human labor, 2022.
Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models, 2021.
Srinivasan Iyer, Xi Victoria Lin, Ramakanth Pasunuru, Todor Mihaylov, Daniel Simig, Ping Yu, Kurt Shuster, Tianlu Wang, Qing Liu, Punit Singh Koura, Xian Li, Brian O'Horo, Gabriel Pereyra, Jeff Wang, Christopher Dewan, Asli Celikyilmaz, Luke Zettlemoyer, and Ves Stoyanov. Opt-iml: Scaling language model instruction meta learning through the lens of generalization, 2023.
| 2307.06290#39 | Instruction Mining: When Data Mining Meets Large Language Model Finetuning | Large language models (LLMs) are initially pretrained for broad capabilities
and then finetuned with instruction-following datasets to improve their
performance in interacting with humans. Despite advances in finetuning, a
standardized guideline for selecting high-quality datasets to optimize this
process remains elusive. In this paper, we first propose InstructMining, an
innovative method designed for automatically selecting premium
instruction-following data for finetuning LLMs. Specifically, InstructMining
utilizes natural language indicators as a measure of data quality, applying
them to evaluate unseen datasets. During experimentation, we discover that
double descent phenomenon exists in large language model finetuning. Based on
this observation, we further leverage BlendSearch to help find the best subset
among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show
that InstructMining-7B achieves state-of-the-art performance on two of the most
popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard. | http://arxiv.org/pdf/2307.06290 | Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun | cs.CL, cs.AI, cs.LG | 22 pages, 7 figures | null | cs.CL | 20230712 | 20231027 | [
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2304.03277"
},
{
"id": "2306.11644"
},
{
"id": "2211.05100"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2109.07958"
},
{
"id": "2009.03300"
},
{
"id": "2212.10560"
}
] |
2307.06135 | 40 | [6] D. Driess, F. Xia, M. S. M. Sajjadi, C. Lynch, A. Chowdhery, B. Ichter, A. Wahid, J. Tompson, Q. Vuong, T. Yu, W. Huang, Y. Chebotar, P. Sermanet, D. Duckworth, S. Levine, V. Vanhoucke, K. Hausman, M. Toussaint, K. Greff, A. Zeng, I. Mordatch, and P. Florence. Palm-E: An embodied multimodal language model, 2023.
[7] C. H. Song, J. Wu, C. Washington, B. M. Sadler, W.-L. Chao, and Y. Su. LLM-Planner: Few-shot grounded planning for embodied agents with large language models. arXiv preprint arXiv:2212.04088, 2022.
[8] W. Huang, F. Xia, T. Xiao, H. Chan, J. Liang, P. Florence, A. Zeng, J. Tompson, I. Mordatch, Y. Chebotar, et al. Inner monologue: Embodied reasoning through planning with language models. arXiv preprint arXiv:2207.05608, 2022. | 2307.06135#40 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
2307.06281 | 40 | Case I. Human or ChatGPT fails to match the prediction with given choices and outputs an 'X'. 70% of misaligned samples belong to that case.
Case II. Human and ChatGPT successfully match the prediction with given choices, but the matching results are different. 30% misaligned samples belong to that case.
Figure 5: The alignment rate between Human and different LLMs in choice extraction.
In the two cases, I means the judgements of human and ChatGPT are less aligned (possibly due to different evaluation standards), while II means the judgements of human and ChatGPT are completely different. We manually investigate 15 samples in Case-II, and find that: 1. In 7 samples, ChatGPT did the right match while human did the wrong one; 2. In 6 samples, the model's prediction is ambiguous and related to multiple choices; 3. In 2 samples, human did the right match while ChatGPT did the wrong one. The results support that ChatGPT can have strong capability in choice matching, even when compared with human annotators. We visualize Case-II samples in Figure 6. | 2307.06281#40 | MMBench: Is Your Multi-modal Model an All-around Player? | Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench. | http://arxiv.org/pdf/2307.06281 | Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin | cs.CV, cs.CL | null | null | cs.CV | 20230712 | 20230813 | [
{
"id": "2302.13971"
},
{
"id": "2306.15195"
},
{
"id": "2305.03726"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "1504.00325"
},
{
"id": "2306.14824"
},
{
"id": "2305.16355"
},
{
"id": "2305.08322"
},
{
"id": "2111.02114"
},
{
"id": "2304.14178"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2304.08485"
}
] |
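The Case I / Case II split described in the 2307.06281#40 chunk above amounts to a small rule over the two extracted labels. The helper below is only an illustration of that categorization; the function and variable names are ours, not the paper's.

```python
# Illustrative categorization of human-vs-ChatGPT disagreements, following the
# Case I / Case II definitions above. Labels are option letters or the pseudo 'X'.
from typing import Optional


def mismatch_case(human_label: str, gpt_label: str) -> Optional[str]:
    if human_label == gpt_label:
        return None          # aligned: not a mis-aligned sample
    if "X" in (human_label, gpt_label):
        return "Case I"      # one side failed to match the prediction to any choice
    return "Case II"         # both matched, but to different choices
```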
2307.06290 | 40 |
Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi-Rui Tam, Keith Stevens, Abdullah Barhoum, Nguyen Minh Duc, Oliver Stanley, Richárd Nagyfi, Shahul ES, Sameer Suri, David Glushkov, Arnav Dantuluri, Andrew Maguire, Christoph Schuhmann, Huu Nguyen, and Alexander Mattick. Openassistant conversations – democratizing large language model alignment, 2023.
Wing Lian, Bleys Goodson, Eugene Pentland, Austin Cook, Chanvichet Vong, and "Teknium". Openorca: An open dataset of gpt augmented flan reasoning traces. https://huggingface.co/Open-Orca/OpenOrca, 2023.
Stephanie Lin, Jacob Hilton, and Owain Evans. Truthfulqa: Measuring how models mimic human falsehoods. arXiv preprint arXiv:2109.07958, 2021.
Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V. Le, Barret Zoph, Jason Wei, and Adam Roberts. The flan collection: Designing data and methods for effective instruction tuning, 2023. | 2307.06290#40 | Instruction Mining: When Data Mining Meets Large Language Model Finetuning | Large language models (LLMs) are initially pretrained for broad capabilities
and then finetuned with instruction-following datasets to improve their
performance in interacting with humans. Despite advances in finetuning, a
standardized guideline for selecting high-quality datasets to optimize this
process remains elusive. In this paper, we first propose InstructMining, an
innovative method designed for automatically selecting premium
instruction-following data for finetuning LLMs. Specifically, InstructMining
utilizes natural language indicators as a measure of data quality, applying
them to evaluate unseen datasets. During experimentation, we discover that
double descent phenomenon exists in large language model finetuning. Based on
this observation, we further leverage BlendSearch to help find the best subset
among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show
that InstructMining-7B achieves state-of-the-art performance on two of the most
popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard. | http://arxiv.org/pdf/2307.06290 | Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun | cs.CL, cs.AI, cs.LG | 22 pages, 7 figures | null | cs.CL | 20230712 | 20231027 | [
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2304.03277"
},
{
"id": "2306.11644"
},
{
"id": "2211.05100"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2109.07958"
},
{
"id": "2009.03300"
},
{
"id": "2212.10560"
}
] |
2307.06135 | 41 | [9] B. Liu, Y. Jiang, X. Zhang, Q. Liu, S. Zhang, J. Biswas, and P. Stone. LLM+P: Empowering large language models with optimal planning proficiency. arXiv preprint arXiv:2304.11477, 2023.
[10] T. Silver, V. Hariprasad, R. S. Shuttleworth, N. Kumar, T. Lozano-Pérez, and L. P. Kaelbling. PDDL planning with pretrained large language models. In NeurIPS 2022 Foundation Models for Decision Making Workshop.
[11] I. Armeni, Z.-Y. He, J. Gwak, A. R. Zamir, M. Fischer, J. Malik, and S. Savarese. 3D scene graph: A structure for unified semantics, 3D space, and camera. In Proceedings of the IEEE/CVF international conference on computer vision, pages 5664–5673, 2019.
[12] U.-H. Kim, J.-M. Park, T.-J. Song, and J.-H. Kim. 3-D scene graph: A sparse and semantic representation of physical environments for intelligent agents. IEEE transactions on cybernetics, 50(12):4921–4933, 2019.
| 2307.06135#41 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
2307.06281 | 41 | ChatGPT-based evaluation for existing multi-modal tasks. To demonstrate ChatGPT is a general evaluator, we also validate our ChatGPT-based evaluation paradigm on existing multi-modality tasks, including GQA [20], OK-VQA [29], and Text-VQA [36]. Given the ground-truth answer, we use GPT3.5 to score the VLM's prediction3. For each benchmark, we randomly select 1000 testing samples and evaluate with exact match (the traditional paradigm) and GPT-based match, respectively, and list the results in Table 3. Basically, GPT-based evaluation demonstrates the same trend compared to the exact-match accuracy on all tasks. On GQA, two algorithms demonstrate very close performance under GPT-based evaluation. In further investigation, we find the reason is that GPT succeeds in matching slightly different answers (compared to GT) generated by MiniGPT-4, while exact matching fails (examples in Table 7).
3The score will be an integer in [1, 2, 3, 4, 5]. 1 means completely wrong, while 5 means completely correct. We provide the prompt used for marking in Appendix.
| 2307.06281#41 | MMBench: Is Your Multi-modal Model an All-around Player? | Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench. | http://arxiv.org/pdf/2307.06281 | Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin | cs.CV, cs.CL | null | null | cs.CV | 20230712 | 20230813 | [
{
"id": "2302.13971"
},
{
"id": "2306.15195"
},
{
"id": "2305.03726"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "1504.00325"
},
{
"id": "2306.14824"
},
{
"id": "2305.16355"
},
{
"id": "2305.08322"
},
{
"id": "2111.02114"
},
{
"id": "2304.14178"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2304.08485"
}
] |
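The 2307.06281#41 chunk above scores free-form VQA predictions with GPT-3.5 on a 1-to-5 scale and reports the average score alongside exact-match accuracy. The sketch below shows one way such a marker could be wired up; the marking prompt is an assumption (the paper keeps its exact prompt in an appendix), and `llm` is again a hypothetical stand-in callable for a chat-completion client.

```python
# Sketch of GPT-style marking for open-ended VQA answers (score in [1, 5]).
# The marking prompt is an assumption; `llm` is a stand-in chat-completion call.
import re
from typing import Callable, Iterable, Tuple


def gpt_score(question: str, ground_truth: str, prediction: str,
              llm: Callable[[str], str]) -> int:
    """Return an integer in [1, 5]: 1 = completely wrong, 5 = completely correct."""
    prompt = (
        "Rate how well the candidate answer matches the reference answer.\n"
        f"Question: {question}\n"
        f"Reference answer: {ground_truth}\n"
        f"Candidate answer: {prediction}\n"
        "Reply with one integer from 1 (completely wrong) to 5 (completely correct)."
    )
    match = re.search(r"[1-5]", llm(prompt))
    return int(match.group()) if match else 1  # conservative fallback


def average_gpt_score(samples: Iterable[Tuple[str, str, str]],
                      llm: Callable[[str], str]) -> float:
    """Average gpt_score over (question, ground_truth, prediction) triples."""
    scores = [gpt_score(q, gt, pred, llm) for q, gt, pred in samples]
    return sum(scores) / len(scores)
```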
2307.06290 | 41 | Philip M McCarthy and Scott Jarvis. Mtld, vocd-d, and hd-d: A validation study of sophisticated approaches to lexical diversity assessment. Behavior research methods, 42(2):381–392, 2010.
Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya Sutskever. Deep double descent: Where bigger models and more data hurt. Journal of Statistical Mechanics: Theory and Experiment, 2021(12):124003, 2021.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. Instruction tuning with gpt-4. arXiv preprint arXiv:2304.03277, 2023. | 2307.06290#41 | Instruction Mining: When Data Mining Meets Large Language Model Finetuning | Large language models (LLMs) are initially pretrained for broad capabilities
and then finetuned with instruction-following datasets to improve their
performance in interacting with humans. Despite advances in finetuning, a
standardized guideline for selecting high-quality datasets to optimize this
process remains elusive. In this paper, we first propose InstructMining, an
innovative method designed for automatically selecting premium
instruction-following data for finetuning LLMs. Specifically, InstructMining
utilizes natural language indicators as a measure of data quality, applying
them to evaluate unseen datasets. During experimentation, we discover that
double descent phenomenon exists in large language model finetuning. Based on
this observation, we further leverage BlendSearch to help find the best subset
among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show
that InstructMining-7B achieves state-of-the-art performance on two of the most
popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard. | http://arxiv.org/pdf/2307.06290 | Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun | cs.CL, cs.AI, cs.LG | 22 pages, 7 figures | null | cs.CL | 20230712 | 20231027 | [
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2304.03277"
},
{
"id": "2306.11644"
},
{
"id": "2211.05100"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2109.07958"
},
{
"id": "2009.03300"
},
{
"id": "2212.10560"
}
] |
2307.06135 | 42 |
[13] A. Rosinol, A. Violette, M. Abate, N. Hughes, Y. Chang, J. Shi, A. Gupta, and L. Carlone. Kimera: From slam to spatial perception with 3D dynamic scene graphs. The International Journal of Robotics Research, 40(12-14):1510–1546, 2021.
[14] P. Gay, J. Stuart, and A. Del Bue. Visual graphs from motion (vgfm): Scene understanding with object geometry reasoning. In Computer Vision – ACCV 2018: 14th Asian Conference on Computer Vision, Perth, Australia, December 2–6, 2018, Revised Selected Papers, Part III 14, pages 330–346. Springer, 2019.
[15] N. Hughes, Y. Chang, and L. Carlone. Hydra: A real-time spatial perception engine for 3D scene graph construction and optimization. Robotics: Science and Systems XIV, 2022.
[16] C. Agia, K. M. Jatavallabhula, M. Khodeir, O. Miksik, V. Vineet, M. Mukadam, L. Paull, and F. Shkurti. Taskography: Evaluating robot task planning over large 3D scene graphs. In Conference on Robot Learning, pages 46–58. PMLR, 2022. | 2307.06135#42 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
2307.06281 | 42 |
Table 3: GPT-based marking vs. Exact Matching. A preliminary study on VQA benchmarks. Accuracy is the success rate of answers being exactly matched with the groundtruth. For each sample, GPT score is an integer n ∈ [1, 5], indicating the similarity between answer and groundtruth. We report the average GPT score for testing samples.
Dataset            GQA [20]               OK-VQA [29]            Text-VQA [36]
Model              Flamingo   MiniGPT-4   Flamingo   MiniGPT-4   Flamingo   MiniGPT-4
Accuracy           33.6%      22.4%       42.6%      21.9%       22.9%      9.8%
Average GPT score  2.75       2.74        2.79       1.97        1.92       1.54
Table 4: CircularEval vs. VanillaEval. We compare CircularEval and VanillaEval on MMBench dev split and present the overall Top-1 accuracy of all VLMs. *Kosmos-2 obtains the result by comparing the perplexity of the combinations of the question and different choices; its accuracy is consistent given the same questions and choice sets, despite the evaluation strategy. | 2307.06281#42 | MMBench: Is Your Multi-modal Model an All-around Player? | Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench. | http://arxiv.org/pdf/2307.06281 | Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin | cs.CV, cs.CL | null | null | cs.CV | 20230712 | 20230813 | [
{
"id": "2302.13971"
},
{
"id": "2306.15195"
},
{
"id": "2305.03726"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "1504.00325"
},
{
"id": "2306.14824"
},
{
"id": "2305.16355"
},
{
"id": "2305.08322"
},
{
"id": "2111.02114"
},
{
"id": "2304.14178"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2304.08485"
}
] |
2307.06290 | 42 | Nils Reimers and Iryna Gurevych. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 11 2019. URL https://arxiv.org/abs/1908.10084.
Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. Bloom: A 176b-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100, 2022.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca, 2023. | 2307.06290#42 | Instruction Mining: When Data Mining Meets Large Language Model Finetuning | Large language models (LLMs) are initially pretrained for broad capabilities
and then finetuned with instruction-following datasets to improve their
performance in interacting with humans. Despite advances in finetuning, a
standardized guideline for selecting high-quality datasets to optimize this
process remains elusive. In this paper, we first propose InstructMining, an
innovative method designed for automatically selecting premium
instruction-following data for finetuning LLMs. Specifically, InstructMining
utilizes natural language indicators as a measure of data quality, applying
them to evaluate unseen datasets. During experimentation, we discover that
double descent phenomenon exists in large language model finetuning. Based on
this observation, we further leverage BlendSearch to help find the best subset
among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show
that InstructMining-7B achieves state-of-the-art performance on two of the most
popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard. | http://arxiv.org/pdf/2307.06290 | Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun | cs.CL, cs.AI, cs.LG | 22 pages, 7 figures | null | cs.CL | 20230712 | 20231027 | [
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2304.03277"
},
{
"id": "2306.11644"
},
{
"id": "2211.05100"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2109.07958"
},
{
"id": "2009.03300"
},
{
"id": "2212.10560"
}
] |
2307.06135 | 43 | [17] N. Shinn, F. Cassano, B. Labash, A. Gopinath, K. Narasimhan, and S. Yao. Reflexion: Language agents with verbal reinforcement learning, 2023.
[18] J. Wei, X. Wang, D. Schuurmans, M. Bosma, E. Chi, Q. Le, and D. Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022.
[19] E. W. Dijkstra. A note on two problems in connexion with graphs. In Edsger Wybe Dijkstra: His Life, Work, and Legacy, pages 287–290. 2022.
[20] D. McDermott, M. Ghallab, A. Howe, C. Knoblock, A. Ram, M. Veloso, D. Weld, and D. Wilkins. PDDL-the planning domain definition language. 1998.
[21] M. Fox and D. Long. PDDL2.1: An extension to PDDL for expressing temporal planning domains. Journal of artificial intelligence research, 20:61–124, 2003. | 2307.06135#43 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
2307.06281 | 43 | Eval \VLM: OpenFlamingo, LLaMA-Adapter, MiniGPT-4, MMGPT, InstructBLIP, VisualGLM, LLaVA, mPLUG-Owl
VanillaEval: 34.6% 62.6% 50.3% 49.1% 61.3% 60.4% 56.1% 67.3%
CircularEval: 4.6% 41.2% 24.3% 15.3% 36.0% 38.1% 38.7% 49.4%
Δ: -30.0% -21.4% -26.0% -33.8% -25.3% -22.3% -17.4% -17.9%
Eval \VLM: OpenFlamingo v2, µ-G2PT, MiniGPT-4-13B, Otter-I, InstructBLIP-13B, PandaGPT, Kosmos-2*, Shikra
VanillaEval: 40.0% 61.3% 61.3% 68.8% 64.4% 55.2% 58.2% 69.9%
CircularEval: 6.6% 43.2% 42.3% 51.4% 44.0% 33.5% 58.2% 58.8%
Δ: -33.4% -18.1% -19.0% -17.4% -20.4% | 2307.06281#43 | MMBench: Is Your Multi-modal Model an All-around Player? | Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench. | http://arxiv.org/pdf/2307.06281 | Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin | cs.CV, cs.CL | null | null | cs.CV | 20230712 | 20230813 | [
{
"id": "2302.13971"
},
{
"id": "2306.15195"
},
{
"id": "2305.03726"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "1504.00325"
},
{
"id": "2306.14824"
},
{
"id": "2305.16355"
},
{
"id": "2305.08322"
},
{
"id": "2111.02114"
},
{
"id": "2304.14178"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2304.08485"
}
] |
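Table 4 in the 2307.06281#42 and #43 chunks above contrasts VanillaEval with CircularEval. The sketch below shows how the two accuracies could be aggregated, assuming the usual reading of CircularEval in which a question only counts as correct when the model answers correctly under every circular shift of its choice list (an assumption on our part; the paper defines the strategy precisely). Field names are illustrative.

```python
# Illustrative VanillaEval vs. CircularEval aggregation. Assumes CircularEval
# counts a question as correct only if every circular pass over its shifted
# choice list is answered correctly.
from typing import Dict, List


def vanilla_accuracy(single_pass_results: List[bool]) -> float:
    """VanillaEval: one pass per question; plain accuracy over all questions."""
    return sum(single_pass_results) / len(single_pass_results)


def circular_accuracy(per_question_passes: Dict[str, List[bool]]) -> float:
    """CircularEval: a question scores only if all N rotated passes are correct."""
    correct = sum(all(passes) for passes in per_question_passes.values())
    return correct / len(per_question_passes)


# Example: a 4-choice question is evaluated 4 times with rotated option lists.
passes = {"q1": [True, True, True, True],   # counts as correct
          "q2": [True, False, True, True]}  # fails under CircularEval
print(circular_accuracy(passes))  # 0.5
```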
2307.06290 | 43 | Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models, 2023.
Chi Wang, Qingyun Wu, Silu Huang, and Amin Saied. Economical hyperparameter optimization with blended search strategy. In ICLR'21, 2021a.
Chi Wang, Qingyun Wu, Markus Weimer, and Erkang Zhu. Flaml: A fast and lightweight automl library, 2021b.
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language model with self generated instructions. arXiv preprint arXiv:2212.10560, 2022a.
| 2307.06290#43 | Instruction Mining: When Data Mining Meets Large Language Model Finetuning | Large language models (LLMs) are initially pretrained for broad capabilities
and then finetuned with instruction-following datasets to improve their
performance in interacting with humans. Despite advances in finetuning, a
standardized guideline for selecting high-quality datasets to optimize this
process remains elusive. In this paper, we first propose InstructMining, an
innovative method designed for automatically selecting premium
instruction-following data for finetuning LLMs. Specifically, InstructMining
utilizes natural language indicators as a measure of data quality, applying
them to evaluate unseen datasets. During experimentation, we discover that
double descent phenomenon exists in large language model finetuning. Based on
this observation, we further leverage BlendSearch to help find the best subset
among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show
that InstructMining-7B achieves state-of-the-art performance on two of the most
popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard. | http://arxiv.org/pdf/2307.06290 | Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun | cs.CL, cs.AI, cs.LG | 22 pages, 7 figures | null | cs.CL | 20230712 | 20231027 | [
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2304.03277"
},
{
"id": "2306.11644"
},
{
"id": "2211.05100"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2109.07958"
},
{
"id": "2009.03300"
},
{
"id": "2212.10560"
}
] |
2307.06135 | 44 | [22] P. Haslum, N. Lipovetzky, D. Magazzeni, and C. Muise. An introduction to the planning domain definition language. Synthesis Lectures on Artificial Intelligence and Machine Learning, 13(2):1–187, 2019.
[23] M. Gelfond and Y. Kahl. Knowledge representation, reasoning, and the design of intelligent agents: The answer-set programming approach. Cambridge University Press, 2014.
[24] S. Tellex, T. Kollar, S. Dickerson, M. R. Walter, A. G. Banerjee, S. J. Teller, and N. Roy. Understanding natural language commands for robotic navigation and mobile manipulation. Proceedings of the AAAI Conference on Artificial Intelligence, 2011.
[25] J. Thomason, A. Padmakumar, J. Sinapov, N. Walker, Y. Jiang, H. Yedidsion, J. W. Hart, P. Stone, and R. J. Mooney. Jointly improving parsing and perception for natural language commands through human-robot dialog. J. Artif. Intell. Res., 67:327–374, 2020. | 2307.06135#44 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
2307.06290 | 44 |
Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Gary Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Maitreya Patel, Kuntal Kumar Pal, Mehrad Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Shailaja Keyur Sampat, Savan Doshi, Siddhartha Mishra, Sujan Reddy, Sumanta Patro, Tanay Dixit, Xudong Shen, Chitta Baral, Yejin Choi, Noah A. Smith, Hannaneh Hajishirzi, and Daniel Khashabi. Super-naturalinstructions: Generalization via declarative instructions on 1600+ nlp tasks, 2022b. | 2307.06290#44 | Instruction Mining: When Data Mining Meets Large Language Model Finetuning | Large language models (LLMs) are initially pretrained for broad capabilities
and then finetuned with instruction-following datasets to improve their
performance in interacting with humans. Despite advances in finetuning, a
standardized guideline for selecting high-quality datasets to optimize this
process remains elusive. In this paper, we first propose InstructMining, an
innovative method designed for automatically selecting premium
instruction-following data for finetuning LLMs. Specifically, InstructMining
utilizes natural language indicators as a measure of data quality, applying
them to evaluate unseen datasets. During experimentation, we discover that
double descent phenomenon exists in large language model finetuning. Based on
this observation, we further leverage BlendSearch to help find the best subset
among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show
that InstructMining-7B achieves state-of-the-art performance on two of the most
popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard. | http://arxiv.org/pdf/2307.06290 | Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun | cs.CL, cs.AI, cs.LG | 22 pages, 7 figures | null | cs.CL | 20230712 | 20231027 | [
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2304.03277"
},
{
"id": "2306.11644"
},
{
"id": "2211.05100"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2109.07958"
},
{
"id": "2009.03300"
},
{
"id": "2212.10560"
}
] |
2307.06135 | 45 | [26] H. Kautz and B. Selman. Pushing the envelope: Planning, propositional logic, and stochastic search. In Proceedings of the national conference on artificial intelligence, pages 1194–1201, 1996.
[27] B. Bonet and H. Geffner. Planning as heuristic search. Artificial Intelligence, 129(1-2):5–33, 2001.
[28] M. Vallati, L. Chrpa, M. Grześ, T. L. McCluskey, M. Roberts, S. Sanner, et al. The 2014 international planning competition: Progress and trends. AI Magazine, 36(3):90–98, 2015.
[29] R. Chitnis, T. Silver, B. Kim, L. Kaelbling, and T. Lozano-Perez. CAMPs: Learning Context-Specific Abstractions for Efficient Planning in Factored MDPs. In Conference on Robot Learning, pages 64–79. PMLR, 2021. | 2307.06135#45 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
2307.06281 | 45 | # 5 Evaluation Results
# 5.1 VLM Inference Setting
Currently, we adopt the traditional zero-shot setting for VLM inference, primarily due to the limited compatibility of existing VLMs with few-shot evaluation settings. However, we have noticed the great potential of few-shot evaluation protocols in LLMs [19]. In future work, we specifically plan to construct a subset of data samples designated for few-shot evaluation. We anticipate that few-shot evaluation will evolve into a standard assessment strategy, akin to the approach employed in LLMs.
# 5.2 Main Results
We select 14 different multi-modality models and benchmark them on MMBench . The models we have selected cover a broad spectrum of strategies and architectures, effectively illustrating the current state-of-the-art in multimodal understanding. To facilitate a fair comparison, we mainly examine the "light" versions of all multimodal models â those with a total amount of parameters below 10B â when multiple variants exist. For further reference, we also evaluate larger variants (e.g. 13B) of some selected models, and report their performance. Please refer to Table 15 for detailed information regarding the architecture and the total parameters of these models. | 2307.06281#45 | MMBench: Is Your Multi-modal Model an All-around Player? | Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench. | http://arxiv.org/pdf/2307.06281 | Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin | cs.CV, cs.CL | null | null | cs.CV | 20230712 | 20230813 | [
{
"id": "2302.13971"
},
{
"id": "2306.15195"
},
{
"id": "2305.03726"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "1504.00325"
},
{
"id": "2306.14824"
},
{
"id": "2305.16355"
},
{
"id": "2305.08322"
},
{
"id": "2111.02114"
},
{
"id": "2304.14178"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2304.08485"
}
] |
2307.06290 | 45 | Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A. Smith, Iz Beltagy, and Hannaneh Hajishirzi. How far can camels go? exploring the state of instruction tuning on open resources, 2023.
Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652, 2021.
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830, 2019.
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric. P Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging llm-as-a-judge with mt-bench and chatbot arena, 2023. | 2307.06290#45 | Instruction Mining: When Data Mining Meets Large Language Model Finetuning | Large language models (LLMs) are initially pretrained for broad capabilities
and then finetuned with instruction-following datasets to improve their
performance in interacting with humans. Despite advances in finetuning, a
standardized guideline for selecting high-quality datasets to optimize this
process remains elusive. In this paper, we first propose InstructMining, an
innovative method designed for automatically selecting premium
instruction-following data for finetuning LLMs. Specifically, InstructMining
utilizes natural language indicators as a measure of data quality, applying
them to evaluate unseen datasets. During experimentation, we discover that
double descent phenomenon exists in large language model finetuning. Based on
this observation, we further leverage BlendSearch to help find the best subset
among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show
that InstructMining-7B achieves state-of-the-art performance on two of the most
popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard. | http://arxiv.org/pdf/2307.06290 | Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun | cs.CL, cs.AI, cs.LG | 22 pages, 7 figures | null | cs.CL | 20230712 | 20231027 | [
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2304.03277"
},
{
"id": "2306.11644"
},
{
"id": "2211.05100"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2109.07958"
},
{
"id": "2009.03300"
},
{
"id": "2212.10560"
}
] |
2307.06135 | 46 | [30] T. Silver, R. Chitnis, A. Curtis, J. B. Tenenbaum, T. Lozano-Pérez, and L. P. Kaelbling. Planning with learned object importance in large problem instances using graph neural networks. In Proceedings of the AAAI conference on artificial intelligence, volume 35, pages 11962–11971, 2021.
[31] F. Ceola, E. Tosello, L. Tagliapietra, G. Nicola, and S. Ghidoni. Robot task planning via deep reinforcement learning: a tabletop object sorting application. In 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), pages 486–492, 2019. doi:10.1109/SMC.2019.8914278.
[32] S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. Narasimhan, and Y. Cao. React: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629, 2022. | 2307.06135#46 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
2307.06281 | 46 | Before delving deeper into the concrete evaluation results, we first compare our CircularEval (each question is inferred over multiple passes with rotated options, and consistency across passes is required) with VanillaEval (each question is inferred only once). In Table 4, we present the results of the two evaluation strategies on the MMBench dev split. For most VLMs, switching from VanillaEval to CircularEval leads to a significant drop in model accuracy. Moreover, the two strategies can lead to different conclusions: for example, InstructBLIP outperforms LLaVA under VanillaEval, but under CircularEval the conclusion is reversed. In the following experiments, we adopt CircularEval as our default evaluation strategy, as it is a more reasonable and well-defined evaluation paradigm.
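To make the CircularEval protocol concrete, here is a minimal sketch of the idea; it is not the benchmark's reference implementation, and `model_predict` is a placeholder for an actual VLM inference plus choice-extraction call (the ChatGPT-based matching step is omitted):

```python
# Minimal sketch of the CircularEval idea: each multiple-choice question is
# inferred N times with the options circularly shifted, and the prediction only
# counts as correct if the model picks the ground-truth option in every pass.
from typing import Callable, List


def circular_eval(
    question: str,
    options: List[str],
    answer: str,
    model_predict: Callable[[str, List[str]], int],  # returns the chosen option index
) -> bool:
    for shift in range(len(options)):
        rotated = options[shift:] + options[:shift]   # circular shift of the choices
        pred_idx = model_predict(question, rotated)
        if rotated[pred_idx] != answer:               # a single wrong pass fails the question
            return False
    return True
```

Overall accuracy is then the fraction of questions for which `circular_eval` returns True.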
We exhaustively evaluate the eight models on the existing 20 leaf abilities of MMBench. In Table 5 and Table 6, we report the models' overall performance and the performance in six L-2 abilities, namely Logical Reasoning (LR), Attribute Reasoning (AR), Relation Reasoning (RR), Fine-grained Perception (Cross Instance) (FP-C), Fine-grained Perception (Single Instance) (FP-S), and Coarse
10 | 2307.06281#46 | MMBench: Is Your Multi-modal Model an All-around Player? | Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench. | http://arxiv.org/pdf/2307.06281 | Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin | cs.CV, cs.CL | null | null | cs.CV | 20230712 | 20230813 | [
{
"id": "2302.13971"
},
{
"id": "2306.15195"
},
{
"id": "2305.03726"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "1504.00325"
},
{
"id": "2306.14824"
},
{
"id": "2305.16355"
},
{
"id": "2305.08322"
},
{
"id": "2111.02114"
},
{
"id": "2304.14178"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2304.08485"
}
] |
2307.06290 | 46 | Ming Zhong, Yang Liu, Da Yin, Yuning Mao, Yizhu Jiao, Pengfei Liu, Chenguang Zhu, Heng Ji, and Jiawei Han. Towards a unified multi-dimensional evaluator for text generation, 2022.
Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. Lima: Less is more for alignment. arXiv preprint arXiv:2305.11206, 2023.
A SEARCH PROCEDURE
[Figure: InstructMining pipeline diagram with two panels, Rule Estimation (candidate datasets, indicator inference, fitted InstructMining rule relating evaluation loss to indicator values) and Data Selection (inference dataset, rated dataset, selected dataset); see the Figure 5 caption for details.] | 2307.06290#46 | Instruction Mining: When Data Mining Meets Large Language Model Finetuning | Large language models (LLMs) are initially pretrained for broad capabilities
and then finetuned with instruction-following datasets to improve their
performance in interacting with humans. Despite advances in finetuning, a
standardized guideline for selecting high-quality datasets to optimize this
process remains elusive. In this paper, we first propose InstructMining, an
innovative method designed for automatically selecting premium
instruction-following data for finetuning LLMs. Specifically, InstructMining
utilizes natural language indicators as a measure of data quality, applying
them to evaluate unseen datasets. During experimentation, we discover that
double descent phenomenon exists in large language model finetuning. Based on
this observation, we further leverage BlendSearch to help find the best subset
among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show
that InstructMining-7B achieves state-of-the-art performance on two of the most
popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard. | http://arxiv.org/pdf/2307.06290 | Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun | cs.CL, cs.AI, cs.LG | 22 pages, 7 figures | null | cs.CL | 20230712 | 20231027 | [
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2304.03277"
},
{
"id": "2306.11644"
},
{
"id": "2211.05100"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2109.07958"
},
{
"id": "2009.03300"
},
{
"id": "2212.10560"
}
] |
2307.06135 | 47 | [33] A. Zeng, A. Wong, S. Welker, K. Choromanski, F. Tombari, A. Purohit, M. Ryoo, V. Sindhwani, J. Lee, V. Vanhoucke, et al. Socratic models: Composing zero-shot multimodal reasoning with language. arXiv preprint arXiv:2204.00598, 2022.
[34] Y. Xie, C. Yu, T. Zhu, J. Bai, Z. Gong, and H. Soh. Translating natural language to planning goals with large-language models. arXiv preprint arXiv:2302.05128, 2023.
[35] B. Peng, M. Galley, P. He, H. Cheng, Y. Xie, Y. Hu, Q. Huang, L. Liden, Z. Yu, W. Chen, et al. Check your facts and try again: Improving large language models with external knowledge and automated feedback. arXiv preprint arXiv:2302.12813, 2023. | 2307.06135#47 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
2307.06281 | 47 | 10
Table 5: CircularEval results on MMBench dev set (L-2 abilities). We adopt the following abbreviations: LR for Logical Reasoning; AR for Attribute Reasoning; RR for Relation Reasoning; FP-C for Fine-grained Perception (Cross Instance); FP-S for Fine-grained Perception (Single Instance); CP for Coarse Perception. Methods above the dash line have parameter sizes ≤ 10B; methods below the dash line have parameter sizes > 10B. Kosmos-2* obtains the results using perplexity (PPL). | 2307.06281#47 | MMBench: Is Your Multi-modal Model an All-around Player? | Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench. | http://arxiv.org/pdf/2307.06281 | Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin | cs.CV, cs.CL | null | null | cs.CV | 20230712 | 20230813 | [
{
"id": "2302.13971"
},
{
"id": "2306.15195"
},
{
"id": "2305.03726"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "1504.00325"
},
{
"id": "2306.14824"
},
{
"id": "2305.16355"
},
{
"id": "2305.08322"
},
{
"id": "2111.02114"
},
{
"id": "2304.14178"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2304.08485"
}
] |
2307.06290 | 47 | Figure 5: Our data selection pipeline. Rule estimation: We first select several candidate datasets. Then, we fuse and sample from them to form datasets of different quality levels. For each dataset, we finetune a language model and evaluate the model on a shared evaluation set. We also calculate bag of indicator values on the dataset. Finally, we perform a linear regression analysis based on our curated experiment results to estimate the linear rule parameters. Data Selection: With the estimated INSTRUCTMINING rule, we first calculate the rule values to assess each example in the dataset. Then, we rank the dataset according to quality scores. We apply FLAML to do BLENDSEARCH. Finally, we use the searched dataset to finetune a language model.
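As a rough illustration of the selection stage described in this caption, the sketch below scores each example with a linear rule over its indicator values, ranks the dataset, and picks the best top-k subset. The weights and the plain sweep over candidate sizes are invented placeholders standing in for the fitted InstructMining rule and the FLAML BlendSearch step.

```python
# Illustrative sketch of rule-based scoring and subset selection.
from typing import Callable, Dict, List

RULE_WEIGHTS: Dict[str, float] = {
    "perplexity": -1.0,        # hypothetical weight: higher perplexity -> lower quality
    "pythia-reward": 0.8,      # hypothetical weight
    "understandability": 0.3,  # hypothetical weight
}


def quality_score(indicators: Dict[str, float]) -> float:
    """Linear rule value for one (instruction, response) example."""
    return sum(w * indicators.get(name, 0.0) for name, w in RULE_WEIGHTS.items())


def select_subset(
    dataset: List[dict],
    candidate_sizes: List[int],
    eval_loss_of: Callable[[List[dict]], float],  # finetune on a subset, return eval loss
) -> List[dict]:
    ranked = sorted(dataset, key=lambda ex: quality_score(ex["indicators"]), reverse=True)
    best_size = min(candidate_sizes, key=lambda k: eval_loss_of(ranked[:k]))
    return ranked[:best_size]
```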
# B INDICATOR DESCRIPTIVE ANALYSIS
To provide more details on natural language indicators, we present further descriptive analysis results on these indicators. We calculate the indicator values across the 129 sampled subsets. Figure 6 presents indicator distribution graphs.
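A rough sketch of how a per-indicator normality check of the kind described next (and reported later in Table 7) might be run with SciPy; the synthetic array stands in for one indicator's 129 per-subset values:

```python
# One-sample KS test of an indicator's values against a fitted normal distribution.
import numpy as np
from scipy import stats


def ks_normality(values: np.ndarray):
    mu, sigma = values.mean(), values.std(ddof=1)
    return stats.kstest(values, "norm", args=(mu, sigma))  # (statistic, p-value)


rng = np.random.default_rng(0)
stat, p_value = ks_normality(rng.normal(loc=0.8, scale=0.05, size=129))
print(f"KS statistic = {stat:.3f}, p = {p_value:.3g}")
```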
In addition, to make sure that statistical regression is valid in this paper, we perform the Kolmogorov-Smirnov (KS) test on every indicator. Test results are provided in Table 7. According to the results, the indicators we use in this paper follow a normal distribution. | 2307.06290#47 | Instruction Mining: When Data Mining Meets Large Language Model Finetuning | Large language models (LLMs) are initially pretrained for broad capabilities
and then finetuned with instruction-following datasets to improve their
performance in interacting with humans. Despite advances in finetuning, a
standardized guideline for selecting high-quality datasets to optimize this
process remains elusive. In this paper, we first propose InstructMining, an
innovative method designed for automatically selecting premium
instruction-following data for finetuning LLMs. Specifically, InstructMining
utilizes natural language indicators as a measure of data quality, applying
them to evaluate unseen datasets. During experimentation, we discover that
double descent phenomenon exists in large language model finetuning. Based on
this observation, we further leverage BlendSearch to help find the best subset
among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show
that InstructMining-7B achieves state-of-the-art performance on two of the most
popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard. | http://arxiv.org/pdf/2307.06290 | Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun | cs.CL, cs.AI, cs.LG | 22 pages, 7 figures | null | cs.CL | 20230712 | 20231027 | [
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2304.03277"
},
{
"id": "2306.11644"
},
{
"id": "2211.05100"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2109.07958"
},
{
"id": "2009.03300"
},
{
"id": "2212.10560"
}
] |
2307.06135 | 48 | [36] T. Schick, J. Dwivedi-Yu, R. Dessì, R. Raileanu, M. Lomeli, L. Zettlemoyer, N. Cancedda, and T. Scialom. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023.
[37] R. Liu, J. Wei, S. S. Gu, T.-Y. Wu, S. Vosoughi, C. Cui, D. Zhou, and A. M. Dai. Mind's eye: Grounded language model reasoning through simulation. arXiv preprint arXiv:2210.05359, 2022.
[38] M. Skreta, N. Yoshikawa, S. Arellano-Rubach, Z. Ji, L. B. Kristensen, K. Darvish, A. Aspuru-Guzik, F. Shkurti, and A. Garg. Errors are useful prompts: Instruction guided task programming with verifier-assisted iterative prompting. arXiv preprint arXiv:2303.14100, 2023. | 2307.06135#48 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
2307.06281 | 48 | VLM Overall LR AR RR FP-S FP-C OpenFlamingo [3] 4.6% 6.7% 8.0% 0.0% 6.7% 2.8% OpenFlamingo v2 [3] 6.6% 4.2% 15.4% 0.9% 8.1% 1.4% MMGPT [14] 15.3% 2.5% 26.4% 13.0% 14.1% 3.4% MiniGPT-4 [46] 24.3% 7.5% 31.3% 4.3% 30.3% 9.0% InstructBLIP [8] 36.0% 14.2% 46.3% 22.6% 37.0% 21.4% VisualGLM [9] 38.1% 10.8% 44.3% 35.7% 43.8% 23.4% LLaVA [27] 38.7% 16.7% 48.3% 30.4% 45.5% 32.4% LLaMA-Adapter [42] 41.2% 11.7% 35.3% 29.6% 47.5% 38.6% µ-G2PT 43.2% 13.3% 38.8% 40.9% 46.5% 38.6% mPLUG-Owl [40] | 2307.06281#48 | MMBench: Is Your Multi-modal Model an All-around Player? | Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench. | http://arxiv.org/pdf/2307.06281 | Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin | cs.CV, cs.CL | null | null | cs.CV | 20230712 | 20230813 | [
{
"id": "2302.13971"
},
{
"id": "2306.15195"
},
{
"id": "2305.03726"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "1504.00325"
},
{
"id": "2306.14824"
},
{
"id": "2305.16355"
},
{
"id": "2305.08322"
},
{
"id": "2111.02114"
},
{
"id": "2304.14178"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2304.08485"
}
] |
2307.06290 | 48 | [Figure: distribution histograms of the natural language indicators (input_length, output_length, understandability, naturalness, coherence, pythia-reward, mtld, knn_6); see the Figure 6 caption below.]
Figure 6: Distribution graph of natural language indicators.
Preprint | 2307.06290#48 | Instruction Mining: When Data Mining Meets Large Language Model Finetuning | Large language models (LLMs) are initially pretrained for broad capabilities
and then finetuned with instruction-following datasets to improve their
performance in interacting with humans. Despite advances in finetuning, a
standardized guideline for selecting high-quality datasets to optimize this
process remains elusive. In this paper, we first propose InstructMining, an
innovative method designed for automatically selecting premium
instruction-following data for finetuning LLMs. Specifically, InstructMining
utilizes natural language indicators as a measure of data quality, applying
them to evaluate unseen datasets. During experimentation, we discover that
double descent phenomenon exists in large language model finetuning. Based on
this observation, we further leverage BlendSearch to help find the best subset
among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show
that InstructMining-7B achieves state-of-the-art performance on two of the most
popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard. | http://arxiv.org/pdf/2307.06290 | Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun | cs.CL, cs.AI, cs.LG | 22 pages, 7 figures | null | cs.CL | 20230712 | 20231027 | [
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2304.03277"
},
{
"id": "2306.11644"
},
{
"id": "2211.05100"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2109.07958"
},
{
"id": "2009.03300"
},
{
"id": "2212.10560"
}
] |
2307.06135 | 49 | [39] Z. Ravichandran, L. Peng, N. Hughes, J. D. Griffith, and L. Carlone. Hierarchical representations and explicit memory: Learning effective navigation policies on 3D scene graphs using graph neural networks. In 2022 International Conference on Robotics and Automation (ICRA), pages 9272–9279. IEEE, 2022.
[40] A. Kurenkov, R. Martín-Martín, J. Ichnowski, K. Goldberg, and S. Savarese. Semantic and geometric modeling with neural message passing in 3D scene graphs for hierarchical mechanical search. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 11227–11233. IEEE, 2021.
[41] S. Garg, N. Sünderhauf, F. Dayoub, D. Morrison, A. Cosgun, G. Carneiro, Q. Wu, T.-J. Chin, I. Reid, S. Gould, et al. Semantics for robotic mapping, perception and interaction: A survey. Foundations and Trends® in Robotics, 8(1–2):1–224, 2020. | 2307.06135#49 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
2307.06281 | 49 | µ-G2PT 43.2% 13.3% 38.8% 40.9% 46.5% 38.6% mPLUG-Owl [40] 49.4% 16.7% 53.2% 47.8% 50.2% 40.7% Otter-I [23, 22] 51.4% 32.5% 56.7% 53.9% 46.8% 38.6% Shikra [5] Kosmos-2â [33] 58.8% 59.2% 25.8% 46.7% 56.7% 55.7% 58.3% 43.5% 57.2% 64.3% 57.9% 49.0% PandaGPT [37] 33.5% 10.0% 38.8% 23.5% 27.9% 35.2% MiniGPT-4-13B [46] 42.3% 20.8% 50.7% 30.4% 49.5% 26.2% InstructBLIP-13B [8] 44.0% 19.1% 54.2% 34.8% 47.8% 24.8% CP 2.0% 5.0% 20.8% 35.6% 49.0% 47.3% 40.6% 56.4% 58.1% | 2307.06281#49 | MMBench: Is Your Multi-modal Model an All-around Player? | Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench. | http://arxiv.org/pdf/2307.06281 | Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin | cs.CV, cs.CL | null | null | cs.CV | 20230712 | 20230813 | [
{
"id": "2302.13971"
},
{
"id": "2306.15195"
},
{
"id": "2305.03726"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "1504.00325"
},
{
"id": "2306.14824"
},
{
"id": "2305.16355"
},
{
"id": "2305.08322"
},
{
"id": "2111.02114"
},
{
"id": "2304.14178"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2304.08485"
}
] |
2307.06290 | 49 | Figure 6: Distribution graph of natural language indicators.
Indicator          Statistics   p Value
input_length       1.0          0.0***
output_length      1.0          0.0***
understandability  0.765        1.25e-50***
naturalness        0.744        3.03e-47***
coherence          0.814        7.89e-60***
pythia-reward      0.657        3.17e-35***
mtld               1.0          0.0***
knn_6              0.85         7.77e-68***
perplexity         0.997        3.34e-202***
Table 7: KS test results for all variables in linear regression. Smaller p value indicates that the variable is highly possible to follow normal distribution. * refers to p ≤ 0.10, ** refers to p ≤ 0.05, and *** refers to p ≤ 0.01. | 2307.06290#49 | Instruction Mining: When Data Mining Meets Large Language Model Finetuning | Large language models (LLMs) are initially pretrained for broad capabilities
and then finetuned with instruction-following datasets to improve their
performance in interacting with humans. Despite advances in finetuning, a
standardized guideline for selecting high-quality datasets to optimize this
process remains elusive. In this paper, we first propose InstructMining, an
innovative method designed for automatically selecting premium
instruction-following data for finetuning LLMs. Specifically, InstructMining
utilizes natural language indicators as a measure of data quality, applying
them to evaluate unseen datasets. During experimentation, we discover that
double descent phenomenon exists in large language model finetuning. Based on
this observation, we further leverage BlendSearch to help find the best subset
among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show
that InstructMining-7B achieves state-of-the-art performance on two of the most
popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard. | http://arxiv.org/pdf/2307.06290 | Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun | cs.CL, cs.AI, cs.LG | 22 pages, 7 figures | null | cs.CL | 20230712 | 20231027 | [
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2304.03277"
},
{
"id": "2306.11644"
},
{
"id": "2211.05100"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2109.07958"
},
{
"id": "2009.03300"
},
{
"id": "2212.10560"
}
] |
2307.06135 | 50 | [42] A. A. Hagberg, D. A. Schult, and P. J. Swart. Exploring network structure, dynamics, and function using networkx. In G. Varoquaux, T. Vaught, and J. Millman, editors, Proceedings of the 7th Python in Science Conference, pages 11–15, Pasadena, CA USA, 2008.
[43] M. Skreta, N. Yoshikawa, S. Arellano-Rubach, Z. Ji, L. B. Kristensen, K. Darvish, A. Aspuru-Guzik, F. Shkurti, and A. Garg. Errors are useful prompts: Instruction guided task programming with verifier-assisted iterative prompting. ArXiv, abs/2303.14100, 2023. URL https://api.semanticscholar.org/CorpusID:257757298.
[44] J. Haviland, N. Sünderhauf, and P. Corke. A holistic approach to reactive mobile manipulation. IEEE Robotics and Automation Letters, 7(2):3122–3129, 2022. | 2307.06135#50 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
2307.06135 | 51 | [45] P. Corke and J. Haviland. Not your grandmother's toolbox – the robotics toolbox reinvented for python. In 2021 IEEE international conference on robotics and automation (ICRA), pages 11357–11363. IEEE, 2021.
[46] J. Zhang. Graph-toolformer: To empower LLMs with graph reasoning ability via prompt augmented by chatgpt. arXiv preprint arXiv:2304.11116, 2023.
[47] S. Haddadin, S. Parusel, L. Johannsmeier, S. Golz, S. Gabl, F. Walch, M. Sabaghian, C. Jähne, L. Hausperger, and S. Haddadin. The franka emika robot: A reference platform for robotics research and education. IEEE Robotics and Automation Magazine, 29(2):46–64, 2022. doi: 10.1109/MRA.2021.3138382.
[48] Omron. Omron LD / HD Series. URL https://www.ia.omron.com/products/ family/3664/dimension.html. | 2307.06135#51 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
2307.06281 | 51 | Perception (CP). These results offer valuable insights into the individual strengths and limitations of each model in different aspects of multi-modality understanding.
As demonstrated in Table 6 and Table 5, Shikra and Kosmos-2 yield superior results, significantly outperforming other models in nearly all L-2 abilities. Second to the two models, mPLUG-Owl, Otter-I, and LLaMA-Adapter4 also exhibit noteworthy performances, and are comparable to each other. After that, four models (Otter-I, InstructBLIP, VisualGLM, LLaVA) are roughly at the same level of overall performance, but with strengths in different L2 abilities. Among all 14 models, OpenFlamingo, MMGPT, and MiniGPT-4 (7B) demonstrate lower overall performance compared to the other models. Furthermore, it is apparent that model scaling enhances performance metrics. This is evident as MiniGPT-4-13B outperforms MiniGPT-4 by an impressive 19.3%, and InstructBLIP-13B outperforms its predecessor, InstructBLIP, by a notable 9.2%. | 2307.06281#51 | MMBench: Is Your Multi-modal Model an All-around Player? | Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench. | http://arxiv.org/pdf/2307.06281 | Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin | cs.CV, cs.CL | null | null | cs.CV | 20230712 | 20230813 | [
{
"id": "2302.13971"
},
{
"id": "2306.15195"
},
{
"id": "2305.03726"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "1504.00325"
},
{
"id": "2306.14824"
},
{
"id": "2305.16355"
},
{
"id": "2305.08322"
},
{
"id": "2111.02114"
},
{
"id": "2304.14178"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2304.08485"
}
] |
2307.06290 | 51 | Table 8: Empirical test of loss.
# C EMPIRICAL TEST OF INSTRUCTION QUALITY EVALUATION HYPOTHESIS
To investigate whether inference loss can serve as a suitable indicator of model capability and data quality, we conduct further finetuning experiments. We randomly select 1,000 examples from four datasets with different quality levels and finetune the LLAMA-2-7B model on the selected datasets. We also finetune LLAMA-2-7B and LLAMA-2-13B models using 1,000 examples from the ORCA-fused dataset. Results are provided in Table 8. As shown in the table, GPT-4 labeled datasets tend to yield lower loss on the two evaluation sets. Finetuned models with a larger model size also yield lower loss on the evaluation sets. Hence, we suppose that evaluation loss can serve as a suitable indicator of model capability and data quality.
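A minimal sketch of how the evaluation loss of a finetuned checkpoint might be measured with Hugging Face Transformers; the checkpoint path and evaluation texts are placeholders, and unlike the paper's setup the loss here is taken over the whole sequence rather than only the response tokens:

```python
# Mean causal-LM cross-entropy loss of a finetuned checkpoint on an evaluation set.
from typing import List

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def mean_eval_loss(checkpoint: str, eval_texts: List[str]) -> float:
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForCausalLM.from_pretrained(checkpoint)
    model.eval()
    losses = []
    with torch.no_grad():
        for text in eval_texts:
            enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
            out = model(**enc, labels=enc["input_ids"])  # loss averaged over tokens
            losses.append(out.loss.item())
    return sum(losses) / len(losses)
```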
# D OTHER EMERGENT PHENOMENA
In this section, we present our analysis of other emergent phenomena observed in this paper. In addition to the regression test, we also conduct a correlation test between indicator values and loss values. We plot the regression analysis results in Figure 7. We detail the other discovered phenomena below.
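For the correlation test mentioned above, a minimal sketch could look like the following; the arrays are hypothetical per-subset values, not results from the paper:

```python
# Pearson correlation between an indicator (perplexity) and evaluation loss.
import numpy as np
from scipy.stats import pearsonr

perplexity = np.array([3.1, 3.4, 2.9, 3.8, 3.3])
eval_loss = np.array([0.82, 0.88, 0.80, 0.95, 0.86])

r, p = pearsonr(perplexity, eval_loss)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")  # positive r: higher perplexity, higher loss
```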
# Phenomenon 3 Perplexity is negatively correlated with data quality. | 2307.06290#51 | Instruction Mining: When Data Mining Meets Large Language Model Finetuning | Large language models (LLMs) are initially pretrained for broad capabilities
and then finetuned with instruction-following datasets to improve their
performance in interacting with humans. Despite advances in finetuning, a
standardized guideline for selecting high-quality datasets to optimize this
process remains elusive. In this paper, we first propose InstructMining, an
innovative method designed for automatically selecting premium
instruction-following data for finetuning LLMs. Specifically, InstructMining
utilizes natural language indicators as a measure of data quality, applying
them to evaluate unseen datasets. During experimentation, we discover that
double descent phenomenon exists in large language model finetuning. Based on
this observation, we further leverage BlendSearch to help find the best subset
among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show
that InstructMining-7B achieves state-of-the-art performance on two of the most
popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard. | http://arxiv.org/pdf/2307.06290 | Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun | cs.CL, cs.AI, cs.LG | 22 pages, 7 figures | null | cs.CL | 20230712 | 20231027 | [
{
"id": "1905.07830"
},
{
"id": "1803.05457"
},
{
"id": "2304.03277"
},
{
"id": "2306.11644"
},
{
"id": "2211.05100"
},
{
"id": "2109.01652"
},
{
"id": "2305.11206"
},
{
"id": "2210.11416"
},
{
"id": "2109.07958"
},
{
"id": "2009.03300"
},
{
"id": "2212.10560"
}
] |
2307.06135 | 52 | [48] Omron. Omron LD / HD Series. URL https://www.ia.omron.com/products/ family/3664/dimension.html.
[49] C. Chi, S. Feng, Y. Du, Z. Xu, E. Cousineau, B. Burchfiel, and S. Song. Diffusion policy: Visuomotor policy learning via action diffusion. In Proceedings of Robotics: Science and Systems (RSS), 2023.
[50] K. Rana, A. Melnik, and N. Sünderhauf. Contrastive language, action, and state pre-training for robot learning, 2023.
[51] Q-transformer: Scalable offline reinforcement learning via autoregressive q-functions. In 7th Annual Conference on Robot Learning, 2023.
[52] K. Rana, M. Xu, B. Tidd, M. Milford, and N. Suenderhauf. Residual skill policies: Learning an adaptable skill-based action space for reinforcement learning for robotics. In 6th Annual Conference on Robot Learning, 2022. URL https://openreview.net/forum?id= 0nb97NQypbK.
# A Implementation Details | 2307.06135#52 | SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning | Large language models (LLMs) have demonstrated impressive results in
developing generalist planning agents for diverse tasks. However, grounding
these plans in expansive, multi-floor, and multi-room environments presents a
significant challenge for robotics. We introduce SayPlan, a scalable approach
to LLM-based, large-scale task planning for robotics using 3D scene graph
(3DSG) representations. To ensure the scalability of our approach, we: (1)
exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic
search' for task-relevant subgraphs from a smaller, collapsed representation of
the full graph; (2) reduce the planning horizon for the LLM by integrating a
classical path planner and (3) introduce an 'iterative replanning' pipeline
that refines the initial plan using feedback from a scene graph simulator,
correcting infeasible actions and avoiding planning failures. We evaluate our
approach on two large-scale environments spanning up to 3 floors and 36 rooms
with 140 assets and objects and show that our approach is capable of grounding
large-scale, long-horizon task plans from abstract, and natural language
instruction for a mobile manipulator robot to execute. We provide real robot
video demonstrations on our project page https://sayplan.github.io. | http://arxiv.org/pdf/2307.06135 | Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf | cs.RO, cs.AI | Accepted for oral presentation at the Conference on Robot Learning
(CoRL), 2023. Project page can be found here: https://sayplan.github.io | null | cs.RO | 20230712 | 20230927 | [
{
"id": "2204.00598"
},
{
"id": "2210.05359"
},
{
"id": "2304.11477"
},
{
"id": "2302.04761"
},
{
"id": "2210.03629"
},
{
"id": "2207.05608"
},
{
"id": "2201.11903"
},
{
"id": "2303.14100"
},
{
"id": "2302.05128"
},
{
"id": "2302.12813"
},
{
"id": "2304.11116"
},
{
"id": "2212.04088"
}
] |
2307.06281 | 52 | The assessment on MMBench reveals that each multi-modality model exhibits unique strengths and weaknesses across different levels of abilities. This observation highlights the importance of carefully selecting and fine-tuning multi-modality models based on the specific requirements and objectives of a given task. Moreover, the identified limitations in some abilities suggest potential directions for further research and development in multi-modality AI systems.
For a more in-depth understanding, we provide a comprehensive analysis of the L3 abilities in Table 8, allowing readers to examine the very details of MMBench and gain deeper insights into the performance disparities among the evaluated models.
# 5.3 Analysis
With the comprehensive evaluation, we observe some interesting facts, which are expected to provide insights for future optimization.
Existing VLMs have limited instruction-following capabilities. For the sake of efficient evaluation, we guide each model to output only the label for each option, for instance, A, B, C, or D. However,
4 According to the authors who submitted the evaluation results, the evaluated model is LLaMA-Adapter trained with LAION400M [35] for visual-language pre-training and LLaVA-I [27] for instruction tuning.
11 | 2307.06281#52 | MMBench: Is Your Multi-modal Model an All-around Player? | Large vision-language models have recently achieved remarkable progress,
exhibiting great perception and reasoning abilities concerning visual
information. However, how to effectively evaluate these large vision-language
models remains a major obstacle, hindering future model development.
Traditional benchmarks like VQAv2 or COCO Caption provide quantitative
performance measurements but suffer from a lack of fine-grained ability
assessment and non-robust evaluation metrics. Recent subjective benchmarks,
such as OwlEval, offer comprehensive evaluations of a model's abilities by
incorporating human labor, but they are not scalable and display significant
bias. In response to these challenges, we propose MMBench, a novel
multi-modality benchmark. MMBench methodically develops a comprehensive
evaluation pipeline, primarily comprised of two elements. The first element is
a meticulously curated dataset that surpasses existing similar benchmarks in
terms of the number and variety of evaluation questions and abilities. The
second element introduces a novel CircularEval strategy and incorporates the
use of ChatGPT. This implementation is designed to convert free-form
predictions into pre-defined choices, thereby facilitating a more robust
evaluation of the model's predictions. MMBench is a systematically-designed
objective benchmark for robustly evaluating the various abilities of
vision-language models. We hope MMBench will assist the research community in
better evaluating their models and encourage future advancements in this
domain. Project page: https://opencompass.org.cn/mmbench. | http://arxiv.org/pdf/2307.06281 | Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin | cs.CV, cs.CL | null | null | cs.CV | 20230712 | 20230813 | [
{
"id": "2302.13971"
},
{
"id": "2306.15195"
},
{
"id": "2305.03726"
},
{
"id": "2304.10592"
},
{
"id": "2106.09685"
},
{
"id": "2301.12597"
},
{
"id": "1504.00325"
},
{
"id": "2306.14824"
},
{
"id": "2305.16355"
},
{
"id": "2305.08322"
},
{
"id": "2111.02114"
},
{
"id": "2304.14178"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2304.08485"
}
] |