Dataset schema (column name, type, and min/max value or string length):

| Column | Type | Min | Max |
|---|---|---|---|
| doi | stringlengths | 10 | 10 |
| chunk-id | int64 | 0 | 936 |
| chunk | stringlengths | 401 | 2.02k |
| id | stringlengths | 12 | 14 |
| title | stringlengths | 8 | 162 |
| summary | stringlengths | 228 | 1.92k |
| source | stringlengths | 31 | 31 |
| authors | stringlengths | 7 | 6.97k |
| categories | stringlengths | 5 | 107 |
| comment | stringlengths | 4 | 398 |
| journal_ref | stringlengths | 8 | 194 |
| primary_category | stringlengths | 5 | 17 |
| published | stringlengths | 8 | 8 |
| updated | stringlengths | 8 | 8 |
| references | list | - | - |
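Not part of the source data: a minimal sketch of how rows following this schema could be loaded and regrouped into full papers with the Hugging Face `datasets` library; the JSON-lines file name is a placeholder.

```python
# Minimal sketch (not from the source): load and regroup rows that follow the
# schema above. The file name "chunks.jsonl" is a placeholder.
from collections import defaultdict
from datasets import load_dataset

ds = load_dataset("json", data_files="chunks.jsonl", split="train")

row = ds[0]
print(row["doi"], row["chunk-id"])   # e.g. "2307.06290", 52
print(row["title"])
print(row["chunk"][:200])            # first 200 characters of the chunk text

# Reassemble papers from their chunks using the shared DOI and the chunk-id order.
papers = defaultdict(list)
for r in ds:
    papers[r["doi"]].append((r["chunk-id"], r["chunk"]))
full_text = {doi: " ".join(c for _, c in sorted(chunks)) for doi, chunks in papers.items()}
```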
2307.06290
52
# Phenomenon 3
Perplexity is negatively correlated with data quality. In general, a higher perplexity score corresponds to increased inference loss, suggesting a potential decline in data quality. Typically, elevated perplexity signifies that the data instance was infrequently encountered during pretraining. Such instances may fall outside the bounds of conventional natural language.

# Phenomenon 4
Reward score is positively correlated with data quality. In this paper, we employ the oasst-pythia reward model to assign reward scores to individual data examples. Reward models are typically trained using human preference labels, suggesting their capacity for human-like evaluations. Thus, examples preferred by the reward model are typically of high quality.

[Figure 7 panels (PPL, Reward, Output Length, Unieval-Und): scatter plots of evaluation loss against each indicator with fitted regression lines.]

Figure 7: Univariate analysis regression plot. We plot the values of 4 indicators against the actual evaluation loss. For every indicator we estimate a univariate linear function between loss and indicator.
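To make the perplexity and reward indicators concrete, here is a hedged sketch using Hugging Face transformers; the base LM and the oasst-pythia reward-model checkpoint IDs are assumptions, and the paper's exact scoring setup may differ.

```python
# Sketch of the two indicators discussed above, for a single instruction-response pair.
# Both checkpoint IDs are placeholders / assumptions, not necessarily the paper's setup.
import math
import torch
from transformers import (AutoModelForCausalLM, AutoModelForSequenceClassification,
                          AutoTokenizer)

example = "Instruction: Summarize the paragraph.\nResponse: The paragraph says ..."

# Perplexity (Phenomenon 3): exponentiated average token loss under a pretrained causal LM.
lm_name = "gpt2"                                   # placeholder base model
lm_tok = AutoTokenizer.from_pretrained(lm_name)
lm = AutoModelForCausalLM.from_pretrained(lm_name)
enc = lm_tok(example, return_tensors="pt")
with torch.no_grad():
    loss = lm(**enc, labels=enc["input_ids"]).loss
perplexity = math.exp(loss.item())                 # higher -> rarer under pretraining

# Reward score (Phenomenon 4): scalar output of a preference-trained reward model.
rm_name = "OpenAssistant/oasst-rm-2-pythia-1.4b-epoch-2.5"   # assumed checkpoint ID
rm_tok = AutoTokenizer.from_pretrained(rm_name)
rm = AutoModelForSequenceClassification.from_pretrained(rm_name)
rm_enc = rm_tok(example, return_tensors="pt")      # exact prompt format depends on the RM
with torch.no_grad():
    reward = rm(**rm_enc).logits[0, 0].item()      # higher -> preferred by the reward model

print(perplexity, reward)
```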
2307.06290#52
Instruction Mining: When Data Mining Meets Large Language Model Finetuning
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard.
http://arxiv.org/pdf/2307.06290
Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun
cs.CL, cs.AI, cs.LG
22 pages, 7 figures
null
cs.CL
20230712
20231027
[ { "id": "1905.07830" }, { "id": "1803.05457" }, { "id": "2304.03277" }, { "id": "2306.11644" }, { "id": "2211.05100" }, { "id": "2109.01652" }, { "id": "2305.11206" }, { "id": "2210.11416" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2212.10560" } ]
2307.06135
53
# A Implementation Details

We utilise GPT-4 [3] as the underlying LLM agent unless otherwise stated. We follow a similar prompting structure to Wake et al. [5], as shown in Appendix J. We define the agent’s role, details pertaining to the scene graph environment, the desired output structure, and a set of input-output examples, which together form the static prompt used for in-context learning. This static prompt is both task- and environment-agnostic and takes up ≈3900 tokens of the LLM’s input. During semantic search, both the 3D Scene Graph and Memory components of the input prompt get updated at each step, while during iterative replanning only the Feedback component gets updated with information from the scene graph simulator. In all cases, the LLM is prompted to output a JSON object containing arguments to call the provided API functions.

# B Environments

[Figure 4 panels: Office Space (Single-Floor, Multi-Room) and a Multi-Floor, Multi-Room environment.]

Figure 4: Large-scale environments used to evaluate SayPlan. The environments span multiple rooms and floors including a vast range of

We evaluate SayPlan across a set of two large-scale environments spanning multiple rooms and floors, as shown in Figure 4. We provide details of each of these environments below, including a breakdown of the number of entities and tokens required to represent them in the 3DSG:
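The prompting loop described at the start of this chunk (a fixed static prompt plus per-step 3D Scene Graph, Memory, and Feedback updates, with the LLM returning a JSON API call) could be organised as in the sketch below; every name in it is an illustrative assumption rather than SayPlan's released code.

```python
# Illustrative sketch only: the field and function names below are assumptions,
# not SayPlan's actual code. The static prompt stays fixed, the scene-graph /
# memory / feedback slots are refreshed each step, and the LLM reply is parsed
# as a JSON-formatted API call.
import json

STATIC_PROMPT = """You are a robot task planner operating over a 3D scene graph.
Reply with a JSON object: {"function": "<api_name>", "args": {...}}.
<role description, environment details, output format, in-context examples omitted>"""

def build_prompt(scene_graph_json: str, memory: list, feedback: str) -> str:
    return "\n\n".join([
        STATIC_PROMPT,                              # ~3900 tokens, task-agnostic
        "3D Scene Graph:\n" + scene_graph_json,     # updated during semantic search
        "Memory:\n" + json.dumps(memory),           # nodes expanded so far
        "Feedback:\n" + (feedback or "None"),       # updated during iterative replanning
    ])

def parse_reply(reply_text: str):
    call = json.loads(reply_text)                   # e.g. {"function": "expand_node", ...}
    return call["function"], call.get("args", {})

# Usage with a placeholder LLM call:
# fn, args = parse_reply(llm_query(build_prompt(collapsed_3dsg, memory=[], feedback="")))
```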
2307.06135#53
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06290
53
Figure 7: Univariate analysis regression plot. We plot the values of 4 indicators against the actual evaluation loss. For every indicator we estimate a univariate linear function between loss and indicator.

# Phenomenon 5
Output length is negatively correlated with data quality. When the number of tokens increases, loss tends to increase, which means that data quality tends to decrease. This can be due to the maximum sequence length in language models. LLaMA-2-7B has a maximum sequence length of 4096 tokens. For some very long instances, it is possible that the sentence is truncated in the middle during preprocessing.

# Phenomenon 6
Understandability is negatively correlated with data quality. Unieval-understandability normally represents the complexity of a sentence. When the complexity increases, data quality decreases. This is possibly due to some translation-related instructions in the dataset, which require the UniEval model to have multilingual assessment ability. However, such examples are not common in our evaluation set, which might explain the negative correlation between the two.

# E MORE DETAILS ON PARAMETER FITTING
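Before the parameter-fitting details below, a small sketch of the truncation check suggested by Phenomenon 5; the tokenizer ID is a placeholder (a gated Hub checkpoint) and the filter is an assumption about how such a check could be done, not part of the paper.

```python
# Sketch of the truncation effect behind Phenomenon 5; the tokenizer checkpoint is a
# placeholder and this check is an assumption, not the paper's code.
from transformers import AutoTokenizer

MAX_SEQ_LEN = 4096  # LLaMA-2-7B context length cited above
tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

def truncation_report(example_text: str) -> dict:
    n_tokens = len(tok(example_text)["input_ids"])
    return {
        "output_length": n_tokens,
        "truncated": n_tokens > MAX_SEQ_LEN,   # part of the example would be cut off
    }

# Very long instances flagged here are the ones Phenomenon 5 associates with higher loss.
```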
2307.06290#53
Instruction Mining: When Data Mining Meets Large Language Model Finetuning
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard.
http://arxiv.org/pdf/2307.06290
Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun
cs.CL, cs.AI, cs.LG
22 pages, 7 figures
null
cs.CL
20230712
20231027
[ { "id": "1905.07830" }, { "id": "1803.05457" }, { "id": "2304.03277" }, { "id": "2306.11644" }, { "id": "2211.05100" }, { "id": "2109.01652" }, { "id": "2305.11206" }, { "id": "2210.11416" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2212.10560" } ]
2307.06135
54
Office: A large-scale office floor, spanning 37 rooms and 151 assets and objects which the agent can interact with. Full and collapsed 3D scene graph representations of this environment are provided in Appendices D and E, respectively. This scene graph represents a real-world office floor within which a mobile manipulator robot is present. This allows us to embody the plans generated using SayPlan and evaluate their feasibility in the corresponding environment. Real-world video demonstrations of a mobile manipulator robot executing the generated plan in this office environment are provided on our project site2.

Home: An existing 3D scene graph from the Stanford 3D Scene Graph dataset [11] which consists of a family home environment (Klickitat) spanning 28 rooms across 3 floors and contains 112 assets and objects that the agent can interact with. A 3D visual of this environment can be viewed at the 3D Scene Graph project website3.

# B.1 Real World Environment Plan Execution

To enable real-world execution of the task plans generated over a 3DSG, we require a corresponding 2D metric map within which we can align the posed nodes captured by the 3DSG. At each room node we assume the real robot can visually locate the appropriate assets and objects that are visible to
2307.06135#54
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
54
VLM Overall AR RR FP-S FP-C
OpenFlamingo [3] 4.3% 9.1% 11.4% 3.3% 2.5% 1.6%
OpenFlamingo v2 [3] 5.7% 11.4% 12.8% 1.4% 5.5% 0.8%
MMGPT [14] 16.0% 1.1% 23.9% 20.7% 18.3% 5.2%
MiniGPT-4 [46] 23.0% 13.6% 32.9% 8.9% 28.7% 11.2%
VisualGLM [9] 33.5% 11.4% 48.8% 27.7% 35.8% 17.6%
InstructBLIP [8] 33.9% 21.6% 47.4% 22.5% 33.0% 24.4%
LLaVA [27] 36.2% 15.9% 53.6% 28.6% 41.8% 20.0%
LLaMA-Adapter [42] 39.5% 13.1% 47.4% 23.0% 45.0% 33.2%
µ-G2PT 39.8% 14.8% 46.7% 31.5% 41.8% 34.4%
mPLUG-Owl [40]
2307.06281#54
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06290
54
# E MORE DETAILS ON PARAMETER FITTING

| Variable | Coef. | Std err. | t value | P > \|t\| |
|---|---|---|---|---|
| β0 | 0.0274 | 0.061 | 0.453 | 0.651 |
| βPPL | - | - | - | - |
| βMTLD | - | - | - | - |
| βRew | -0.0078 | 0.003 | -2.561 | 0.012** |
| βLen | - | - | - | - |
| βNat | -0.3212 | 0.107 | -3.005 | 0.003*** |
| βCoh | -0.1520 | 0.129 | -1.180 | 0.240 |
| βUnd | 0.4421 | 0.168 | 2.639 | 0.009*** |
| βKnn6 | - | - | - | - |

[1] R²=0.522, F-statistic=33.84. [2] Prob(F-statistic)=4.52e-19, Log-Likelihood=435.11. [3] *: p ≤ 0.1, **: p ≤ 0.05, ***: p ≤ 0.01.
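The kind of fit summarized in this table can be reproduced with ordinary least squares in statsmodels; the sketch below uses synthetic data and illustrative column names, not the paper's measurements.

```python
# Sketch of the kind of OLS fit summarized in the table above. The data here are
# synthetic and the column names are illustrative; only the signs of the planted
# coefficients mirror the reported estimates.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 129
df = pd.DataFrame({
    "Rew": rng.normal(size=n),   # reward score
    "Nat": rng.normal(size=n),   # naturalness
    "Coh": rng.normal(size=n),   # coherence
    "Und": rng.normal(size=n),   # understandability
})
df["loss"] = (0.03 - 0.008 * df["Rew"] - 0.32 * df["Nat"] - 0.15 * df["Coh"]
              + 0.44 * df["Und"] + rng.normal(scale=0.2, size=n))

X = sm.add_constant(df[["Rew", "Nat", "Coh", "Und"]])   # beta_0 plus the selected indicators
fit = sm.OLS(df["loss"], X).fit()
print(fit.summary())   # coefficients, std errors, t values, p values, R^2, as in Table 9
```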
2307.06290#54
Instruction Mining: When Data Mining Meets Large Language Model Finetuning
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard.
http://arxiv.org/pdf/2307.06290
Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun
cs.CL, cs.AI, cs.LG
22 pages, 7 figures
null
cs.CL
20230712
20231027
[ { "id": "1905.07830" }, { "id": "1803.05457" }, { "id": "2304.03277" }, { "id": "2306.11644" }, { "id": "2211.05100" }, { "id": "2109.01652" }, { "id": "2305.11206" }, { "id": "2210.11416" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2212.10560" } ]
2307.06135
55
2: sayplan.github.io
3: 3dscenegraph.stanford.edu/Klickitat

| Entity Type | Number of Entities | Total Number of Tokens | Average Number of Tokens |
|---|---|---|---|
| Room Node | 37 | 340 | 9.19 |
| Asset Node | 73 | 1994 | 27.3 |
| Object Node | 78 | 2539 | 32.6 |
| Agent Node | 1 | 15 | 15.0 |
| Node Edges | 218 | 1843 | 8.45 |
| Full Graph | 407 | 6731 | 16.5 |
| Collapsed Graph | 105 | 878 | 8.36 |

Table 4: Detailed 3DSG breakdown for the Office Environment. The table summarises the number of different entities present in the 3DSG, the total LLM tokens required to represent each entity group and the average number of tokens required to represent a single type of entity.

| Entity Type | Number of Entities | Total Number of Tokens | Average Number of Tokens |
|---|---|---|---|
| Room Node | 28 | 231 | 8.25 |
| Asset Node | 52 | 1887 | 36.3 |
| Object Node | 60 | 1881 | 31.35 |
| Agent Node | 1 | 15 | 15 |
| Node Edges | 323 | 2584 | 8 |
| Full Graph | 464 | 6598 | 14.2 |
| Collapsed Graph | 240 | 1817 | 7.57 |

Table 5: Detailed 3DSG breakdown for the Home Environment. The table summarises the number of different entities present in the 3DSG, the total LLM tokens required to represent each entity group and the average number of tokens required to represent a single type of entity.
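The per-entity token counts above could be reproduced by serializing each node and counting LLM tokens; the sketch below assumes a JSON serialization and the GPT-4 tokenizer via tiktoken, which may not match the paper's exact format.

```python
# Sketch (the node schema and serialization are assumptions, not the paper's exact
# format): count the GPT-4 tokens needed to represent each entity group of a 3DSG,
# as tabulated above.
import json
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")

def token_stats(entities: list) -> dict:
    per_entity = [len(enc.encode(json.dumps(e))) for e in entities]
    total = sum(per_entity)
    return {"count": len(entities),
            "total_tokens": total,
            "avg_tokens": round(total / max(len(entities), 1), 2)}

# Made-up nodes in an assumed schema:
rooms  = [{"id": "kitchen", "type": "room", "pose": [1.0, 2.0, 0.0]}]
assets = [{"id": "fridge", "type": "asset", "room": "kitchen", "state": "closed"}]
print(token_stats(rooms), token_stats(assets))
```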
2307.06135#55
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
55
µ-G2PT 39.8% 14.8% 46.7% 31.5% 41.8% 34.4%
mPLUG-Owl [40] 46.6% 19.9% 56.1% 39.0% 53.0% 26.8%
Otter-I [23, 22] 48.3% 22.2% 63.3% 39.4% 46.8% 36.4%
Kosmos-2∗ [33] 58.2% 48.6% 59.9% 34.7% 65.6% 47.9%
Shikra [5] 60.2% 33.5% 69.6% 53.1% 61.8% 50.4%
PandaGPT [37] 30.6% 15.3% 41.5% 22.0% 20.3% 20.4%
MiniGPT-4-13B [46] 42.3% 17.0% 62.6% 30.0% 49.8% 19.6%
InstructBLIP-13B [8] 43.1% 17.0% 59.5% 36.2% 45.8% 24.0%
CP 1.5% 4.0% 18.2% 28.3% 41.5% 41.1% 40.4% 50.6% 49.8%
2307.06281#55
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06290
55
Table 9: Linear regression parameter estimation results using ordinary least squares (OLS). P > |t| represents the p value under a Student's t-test on each coefficient. A lower p value indicates that the coefficient for this variable is more significant and acceptable. R² and adjusted R² represent how well the data are fit by the estimated linear function.

We present the statistical regression results for our estimated rule of Equation 4 in Table 9. We select the indicators first through R² and then through p value, to make sure that our regression result is valid and effective. According to the regression result, Rew, Nat and Und appear to be the most prominent indicators in this rule. This estimation result reveals that Und is negatively correlated with data quality, while the other three indicators are positively correlated with data quality.

# F CASE STUDY

In this section, we present case studies on the INSTRUCTMINING rule and our finetuned model.

F.1 RULE SELECTED EXAMPLES

We give examples of INSTRUCTMINING-selected high-quality and low-quality data examples in Table 10, Table 11 and Table 12.

# Example 1

Instruction: Teacher: Given reviews from Amazon, classify those review based on their content into two classes: Negative or Positive. Now, understand the problem?
2307.06290#55
Instruction Mining: When Data Mining Meets Large Language Model Finetuning
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard.
http://arxiv.org/pdf/2307.06290
Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun
cs.CL, cs.AI, cs.LG
22 pages, 7 figures
null
cs.CL
20230712
20231027
[ { "id": "1905.07830" }, { "id": "1803.05457" }, { "id": "2304.03277" }, { "id": "2306.11644" }, { "id": "2211.05100" }, { "id": "2109.01652" }, { "id": "2305.11206" }, { "id": "2210.11416" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2212.10560" } ]
2307.06135
56
it within the 3DSG. The mobile manipulator robot used for the demonstration consisted of a Franka Panda 7-DoF robot manipulator [47] attached to an LD-60 Omron mobile base [48]. The robot is equipped with a LiDAR scanner to localise itself both within the real world and the corresponding 3DSG. All the skills or affordances, including pick, place, open and close, were developed using the motion controller from [44] coupled with an RGB-D vision module for grasp detection, and a behaviour tree to manage the execution of each component including failure recovery. Future work could incorporate a range of pre-trained skills (whisking, flipping, spreading etc.) using imitation learning [49, 50] or reinforcement learning [51, 52] to increase the diversity of tasks that SayPlan is able to achieve.

# C Tasks
2307.06135#56
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06290
56
# Example 1

Instruction: Teacher: Given reviews from Amazon, classify those review based on their content into two classes: Negative or Positive. Now, understand the problem?

Input: This is not waterproof. Its GARBAGE where batteries go in broke, while trying to adj. DO NOT BUY. You turn the end to control it, that loosens the battery cover where water gets in so how is it waterproff????

Output: Yes, I understand the problem. The task is to classify the given review as either negative or positive based on its content. In this instance, the review contains negative content. The reviewer is expressing dissatisfaction with the product and warning others not to buy it. The reviewer specifically mentions that the product is not waterproof and that the battery cover is broken, which allows water to get in. The use of all caps and exclamation marks also indicates a strong negative sentiment. Therefore, I would classify this review as negative.

InstructMining score: 0.7902 (lower is better)

Table 10: Rule selected example 1. In this example, the assistant completed the task with good interaction.

# Example 2
2307.06290#56
Instruction Mining: When Data Mining Meets Large Language Model Finetuning
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard.
http://arxiv.org/pdf/2307.06290
Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun
cs.CL, cs.AI, cs.LG
22 pages, 7 figures
null
cs.CL
20230712
20231027
[ { "id": "1905.07830" }, { "id": "1803.05457" }, { "id": "2304.03277" }, { "id": "2306.11644" }, { "id": "2211.05100" }, { "id": "2109.01652" }, { "id": "2305.11206" }, { "id": "2210.11416" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2212.10560" } ]
2307.06135
57
# C Tasks

| Instruction Family | Task Type | Num | Explanation | Example Instruction |
|---|---|---|---|---|
| Semantic Search | Simple Search | 30 | Queries focussed on evaluating the basic semantic search capabilities of SayPlan | Find me a ripe banana. |
| Semantic Search | Complex Search | 30 | Abstract semantic search queries which require complex reasoning | Find the room where people are playing board games. |
| Causal Planning | Simple Planning | 15 | Queries which require the agent to perform search, causal reasoning and environment interaction in order to solve a task. | Refrigerate the orange left on the kitchen bench. |
| Causal Planning | Long-Horizon Planning | 15 | Long-horizon planning queries requiring multiple interactive steps | Tobi spilt soda on his desk. Help him clean up. |

Table 6: List of evaluation task instructions. We evaluate SayPlan on 90 instructions, grouped to test various aspects of the planning capabilities across large-scale scene graphs. The full instruction set is given in Appendix C.

We evaluate SayPlan across 4 instruction sets which are classified to evaluate different aspects of its 3D scene graph reasoning and planning capabilities as shown in Table 6:
2307.06135#57
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
57
we observe that these models often generate a full sentence corresponding to one option or a sentence semantically akin to one of the options. This tendency is the primary reason for employing ChatGPT for choice extraction. To improve the usability of multi-modality models and empower diversified applications, pursuing stronger instruction-following ability can be a significant direction.

The overall performance of existing VLMs is still limited. The strict CircularEval strategy reveals that the overall performance of existing VLMs is not satisfying. In experiments, all VLMs, except Kosmos-2 and Shikra, failed to reach 50% Top-1 accuracy on the MMBench test split for multiple-choice questions with at most 4 choices. Potential reasons are twofold: 1. Current VLMs are not robust enough to produce the same prediction with slightly different prompts (performance under CircularEval is much worse than performance under VanillaEval, see Table 4). 2. The capabilities of current VLMs are still quite limited and can be further improved. We hope that MMBench and our evaluation strategy can serve as an important resource for the development, iteration and optimization of VLMs in the future.
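The CircularEval strategy referenced here can be summarized in a few lines; the sketch below is a simplification (exact matching stands in for the ChatGPT-based choice extraction) and the function names are illustrative.

```python
# Simplified sketch of the CircularEval idea described above: an answer counts only
# if the model is correct under every circular shift of the option order. The paper
# additionally uses ChatGPT to map free-form replies onto choices; here `predict`
# is a stand-in assumed to return one of the provided option strings.
from typing import Callable, List

def circular_eval(question: str, choices: List[str], answer: str,
                  predict: Callable[[str, List[str]], str]) -> bool:
    for shift in range(len(choices)):
        rotated = choices[shift:] + choices[:shift]   # same options, new order
        if predict(question, rotated) != answer:      # one failure fails the circle
            return False
    return True

# Usage with a trivial predictor that always picks the first option:
# circular_eval("What is shown?", ["cat", "dog", "car"], "dog",
#               predict=lambda q, opts: opts[0])      # -> False
```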
2307.06281#57
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06290
57
InstructMining score: 0.7902 (lower is better)

Table 10: Rule selected example 1. In this example, the assistant completed the task with good interaction.

# Example 2

Instruction: Given the question: Answer the following question given this paragraph: Sunlight contains the complete range of wavelengths of electromagnetic waves. The entire range is called the electromagnetic spectrum. Q: What do you call the entire range of wavelengths of electromagnetic waves? Choices: - electric spectrum - gaseous spectrum - molecular spectrum - electromagnetic spectrum A: The answer is:

# Output:

Electromagnetic spectrum.

InstructMining score: 0.9179 (medium)

Table 11: Rule selected example 2. In this example, the assistant completed the task without suitable interaction with the human.

# Example 3

Instruction: It is best to work directly with the manufacturer to get the best price. Add punctuation.

Output: It is best to work directly with the manufacturer, to get the best price.

InstructMining score: 1.019 (higher is worse)

Table 12: Rule selected example 3. In this example, the assistant didn’t complete the task.

F.2 LANGUAGE MODEL RESPONSE EXAMPLES

In this section, we present some prompt examples of our finetuned model. We leverage tasks from the MT-BENCH dataset to demonstrate the model’s ability from various perspectives.

Example 1 Writing ability
2307.06290#57
Instruction Mining: When Data Mining Meets Large Language Model Finetuning
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard.
http://arxiv.org/pdf/2307.06290
Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun
cs.CL, cs.AI, cs.LG
22 pages, 7 figures
null
cs.CL
20230712
20231027
[ { "id": "1905.07830" }, { "id": "1803.05457" }, { "id": "2304.03277" }, { "id": "2306.11644" }, { "id": "2211.05100" }, { "id": "2109.01652" }, { "id": "2305.11206" }, { "id": "2210.11416" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2212.10560" } ]
2307.06135
58
We evaluate SayPlan across 4 instruction sets which are classified to evaluate different aspects of its 3D scene graph reasoning and planning capabilities as shown in Table 6:

Simple Search: Focused on evaluating the semantic search capabilities of the LLM based on queries which directly reference information in the scene graph, as well as the basic graph-based reasoning capabilities of the LLM.

Complex Search: Abstract semantic search queries which require complex reasoning. The information required to solve these search tasks is not readily available in the graph and has to be inferred by the underlying LLM.

Simple Planning: Task planning queries which require the agent to perform graph search, causal reasoning and environment interaction in order to solve the task. Typically requires shorter-horizon plans over single rooms.

Long Horizon Planning: Long-horizon planning queries require multiple interactive steps. These queries evaluate SayPlan’s ability to reason over temporally extended instructions to investigate how well it scales to such regimes. Typically requires long-horizon plans spanning multiple rooms.

The full list of instructions used and the corresponding aspect the query evaluates are given in the following tables:

# C.1 Simple Search
# C.1.1 Office Environment
# Instruction
2307.06135#58
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
58
Cross-Instance Understanding and Logic Reasoning are extremely difficult. An examination of our evaluation results reveals that cross-instance understanding—specifically relation reasoning (RR) and cross-instance fine-grained perception (FP-C)—poses a significant challenge for existing Visual Language Models (VLMs). The average accuracy for cross-instance fine-grained perception across all models is 27.7% and 23.4% on the dev and test splits respectively, significantly lower than that of single-instance fine-grained perception (FP-S). A similar disparity can be observed between relation reasoning (RR) and attribute reasoning (AR) as evidenced in Tables 5 and 6. Furthermore, when compared to other L-2 abilities, the logical reasoning (LR) capability of existing models appears strikingly weak, with an average accuracy of only 17.8%. The results indicate that improving the cross-instance understanding and logic reasoning capabilities of VLMs can be a significant and promising direction.
2307.06281#58
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06290
58
In this section, we present some prompt examples of our finetuned model. We leverage tasks from the MT-BENCH dataset to demonstrate the model’s ability from various perspectives.

Example 1 Writing ability

System Message: A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user’s questions.

User: Edit the following paragraph to correct any grammatical errors: She didn’t remembre where is her purse, so I thinks its in the car but he’s say it’s on kitchen table but he are not sure, and then they asked me to looking for it, she’s say, "Can you?", and I responds with, "Maybe, but ain’t no sure," and he not heard me, and, "What?", he asks, "Did you found it?".
2307.06290#58
Instruction Mining: When Data Mining Meets Large Language Model Finetuning
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard.
http://arxiv.org/pdf/2307.06290
Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun
cs.CL, cs.AI, cs.LG
22 pages, 7 figures
null
cs.CL
20230712
20231027
[ { "id": "1905.07830" }, { "id": "1803.05457" }, { "id": "2304.03277" }, { "id": "2306.11644" }, { "id": "2211.05100" }, { "id": "2109.01652" }, { "id": "2305.11206" }, { "id": "2210.11416" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2212.10560" } ]
2307.06135
59
Find me object K31X. > unguided search with no semantic cue
Find me a carrot. > semantic search based on node name
Find me anything purple in the postdoc bays. > semantic search with termination conditioned on attribute
Find me a ripe banana. > semantic search with termination conditioned on attribute
Find me something that has a screwdriver in it. > unguided search with termination conditioned on children
One of the offices has a poster of the Terminator. Which one is it? > semantic search with termination conditioned on children
I printed a document but I don’t know which printer has it. Find the document. > semantic search based on parent
I left my headphones in one of the meeting rooms. Locate them. > semantic search based on parent
Find the PhD bay that has a drone in it. > semantic search with termination conditioned on children
Find the kale that is not in the kitchen. > semantic search with termination conditioned on a negation predicate on parent
Find me an office that does not have a cabinet. > semantic search with termination conditioned on a negation predicate on children
Find me an office that contains a cabinet, a desk, and a chair. > semantic search with termination conditioned on a conjunctive query on children
2307.06135#59
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
59
The introduction of object localization data is anticipated to enhance model performance. Among various models, Kosmos-2 and Shikra notably excel, offering significant improvements across almost all L-2 capabilities, particularly in logical reasoning and cross-instance fine-grained perception. Compared to other models, both Kosmos-2 and Shikra incorporate object localization within their training datasets. The integration of localization data infuses more detailed object-specific information into the models, allowing them to comprehend the dynamic states of objects more effectively. Moreover, it aids in elucidating relationships and interactions between distinct objects. This strategy contributes substantively to the enhancement of the models’ capabilities in logical reasoning and cross-instance fine-grained perception.

# 6 Conclusion
2307.06281#59
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06290
59
Assistant: "Sure, I can help you with that. Here’s the corrected paragraph: "She didn’t remember where her purse was, so I think it’s in the car but he’s not sure, and then they asked me to look for it, she said, ’Can you?’, and I responded with, ’Maybe, but I’m not sure,’ and he didn’t hear me, and then he asked, ’What?’, he asked, ’Did you find it?’" I made the following changes: 1. Added a comma after "car" to separate the two independent clauses in the sentence. 2. Replaced "ain’t" with "are not sure" to improve the sentence’s grammar and vocabulary. 3. Added a comma after "he’s not sure" to separate the two items in the list. 4. Added a comma after "she said" to separate the quotation marks from the rest of the sentence. 5. Added a comma after "he asked" to separate the two items in the list. 6. Added a period after "he asked" to indicate the end of the sentence". Table 13: Example of model writing ability. Example 2 Roleplay ability
2307.06290#59
Instruction Mining: When Data Mining Meets Large Language Model Finetuning
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard.
http://arxiv.org/pdf/2307.06290
Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun
cs.CL, cs.AI, cs.LG
22 pages, 7 figures
null
cs.CL
20230712
20231027
[ { "id": "1905.07830" }, { "id": "1803.05457" }, { "id": "2304.03277" }, { "id": "2306.11644" }, { "id": "2211.05100" }, { "id": "2109.01652" }, { "id": "2305.11206" }, { "id": "2210.11416" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2212.10560" } ]
2307.06135
60
Find me an office that contains a cabinet, a desk, and a chair. > semantic search with termination conditioned on a conjunctive query on children
Find a book that was left next to a robotic gripper. > semantic search with termination conditioned on a sibling
Luis gave one of his neighbours a stapler. Find the stapler. > semantic search with termination conditioned on a sibling
There is a meeting room with a chair but no table. Locate it. > semantic search with termination conditioned on a conjunctive query with negation
2307.06135#60
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
60
objects. This strategy contributes substantively to the enhancement of the models’ capabilities in logical reasoning and cross-instance fine-grained perception.

# 6 Conclusion

The inherent limitations of traditional benchmarks (VQAv2, COCO Caption, etc.) and subjective benchmarks (mPLUG-Owl, etc.) underscore the need for an innovative evaluation paradigm in vision-language understanding. To address this, we introduce MMBench, a multi-modality benchmark that proposes an objective evaluation pipeline of 2,974 multiple-choice questions covering 20 ability dimensions. To produce robust and reliable evaluation results, we introduce a new evaluation strategy named CircularEval. The strategy is much stricter than the vanilla 1-pass evaluation and can yield reliable evaluation results with an affordable cost. Additionally, we leverage ChatGPT to compare model predictions with target choices, which enables a fair comparison among VLMs with different levels of instruction-following capabilities. Comprehensive studies on both MMBench and public benchmarks indicate the feasibility of using ChatGPT as the judge. We hope MMBench can aid the research community in optimizing their models and inspire future progress.

# 7 Acknowledgement
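To make the CircularEval strategy mentioned above concrete, the following is a minimal sketch of how a single multiple-choice question could be scored under it: the choices are circularly shifted and the question only counts as correct if every rotation is answered correctly. The `ask_model` callable, the letter-based answer format, and the all-rotations-correct pass rule are illustrative assumptions, not the benchmark's reference implementation.

```python
from typing import Callable, List

def circular_eval(question: str,
                  options: List[str],
                  answer_index: int,
                  ask_model: Callable[[str, List[str]], str]) -> bool:
    """Sketch of a CircularEval-style check for one multiple-choice question.

    ask_model(question, options) is assumed to return the chosen option letter
    ("A", "B", ...), e.g. after ChatGPT-based matching of a free-form prediction.
    """
    n = len(options)
    letters = [chr(ord("A") + i) for i in range(n)]
    for shift in range(n):
        rotated = options[shift:] + options[:shift]   # circularly shift the choices
        gt_position = (answer_index - shift) % n      # where the ground truth landed
        if ask_model(question, rotated) != letters[gt_position]:
            return False                              # one failed pass fails the question
    return True                                       # stricter than a single 1-pass check
```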
2307.06281#60
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06290
60
Table 13: Example of model writing ability.

Example 2 Roleplay ability

System Message: A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user’s questions.

User: Embrace the role of Sheldon from "The Big Bang Theory" as we delve into our conversation. Don’t start with phrases like "As Sheldon". Let’s kick things off with the following question: "What is your opinion on hand dryers?"

Assistant: As Sheldon, I am a fan of hand dryers. They are efficient, fast, and reduce the amount of time spent drying your hands after washing them. I also appreciate the fact that they are environmentally friendly as they use minimal water compared to paper towel dispensers. However, I do have some reservations about the noise level of some hand dryers, which can be quite loud and disruptive in public restrooms. Overall, I think hand dryers are a great option for public restrooms and I would recommend them to anyone looking for a fast and eco-friendly way to dry their hands

Table 14: Example of model roleplay ability.

Example 3 Reasoning ability

System Message: A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user’s questions.
2307.06290#60
Instruction Mining: When Data Mining Meets Large Language Model Finetuning
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard.
http://arxiv.org/pdf/2307.06290
Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun
cs.CL, cs.AI, cs.LG
22 pages, 7 figures
null
cs.CL
20230712
20231027
[ { "id": "1905.07830" }, { "id": "1803.05457" }, { "id": "2304.03277" }, { "id": "2306.11644" }, { "id": "2211.05100" }, { "id": "2109.01652" }, { "id": "2305.11206" }, { "id": "2210.11416" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2212.10560" } ]
2307.06281
61
# 7 Acknowledgement

We would like to express our great gratitude to community contributors for their contributions. Li Dong and Zhiliang Peng from Microsoft Research Asia have provided the inference results of Kosmos-2 on MMBench. Furthermore, we are thankful to Peng Gao and Peng Xu from Shanghai AI Laboratory, for supplying the inference results of µ-G2PT and LLaMA-Adapter. We are also grateful to Zhao Zhang and Keqin Chen from SenseTime for their provision of the inference results of Shikra. Moreover, our gratitude goes to Jiaye Ge, also at the Shanghai AI Laboratory, for orchestrating the data collection process and ensuring the timely delivery of the dataset. Yixiao Fang also merits our appreciation for his assistance in inferring MMGPT on MMBench. Lastly, we are indebted to Jun Xu and Xiaoqin Du from Shanghai AI Laboratory, whose support was instrumental in the construction of the MMBench webpage and leaderboard.

# A GPT-based VQA evaluation

Prompt used for GPT-based GQA [20] evaluation:

gpt_query_template = (
2307.06281#61
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06290
61
Example 3 Reasoning ability

System Message: A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user’s questions.

User: One morning after sunrise, Suresh was standing facing a pole. The shadow of the pole fell exactly to his right. Can you tell me the direction towards which the shadow was pointing - east, south, west, or north? Explain your reasoning steps.

Assistant: To determine the direction of the shadow, we need to consider the position of the sun in the sky. Since sunrise occurs before sunset, we can assume that the sun was rising in the east. Therefore, the shadow would be pointing towards the east. Reasoning steps:
1. Suresh was standing facing a pole.
2. The shadow of the pole fell exactly to his right.
3. The sun rises in the east.
4. The shadow would be pointing towards the east

Table 15: Example of model reasoning ability.

Example 4 Math ability

System Message: A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user’s questions.

User: A tech startup invests $8000 in software development in the first year, and then invests half of that amount in software development in the second year. What’s the total amount the startup invested in software development over the two years?
2307.06290#61
Instruction Mining: When Data Mining Meets Large Language Model Finetuning
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard.
http://arxiv.org/pdf/2307.06290
Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun
cs.CL, cs.AI, cs.LG
22 pages, 7 figures
null
cs.CL
20230712
20231027
[ { "id": "1905.07830" }, { "id": "1803.05457" }, { "id": "2304.03277" }, { "id": "2306.11644" }, { "id": "2211.05100" }, { "id": "2109.01652" }, { "id": "2305.11206" }, { "id": "2210.11416" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2212.10560" } ]
2307.06135
62
Find me a FooBar. > unguided search with no semantic cue
Find me a bottle of wine. > semantic search based on node name
Find me a plant with thorns. > semantic search with termination conditioned on attribute
Find me a plant that needs watering. > semantic search with termination conditioned on attribute
Find me a bathroom with no toilet. > semantic search with termination conditioned on a negation predicate
The baby dropped their rattle in one of the rooms. Locate it. > semantic search based on node name
I left my suitcase either in the bedroom or the living room. Which room is it in. > semantic search based on node name
Find the room with a ball in it. > semantic search based on node name
I forgot my book on a bed. Locate it. > semantic search based on node name
Find an empty vase that was left next to sink. > semantic search with termination conditioned on sibling
Locate the dining room which has a table, chair and a baby monitor. > semantic search with termination conditioned on
Locate a chair that is not in any dining room.
I need to shave. Which room has both a razor and shaving cream.
Find me 2 bedrooms with pillows in them.
Find me 2 bedrooms without pillows in them.
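All of the instructions above exercise the same underlying mechanism: an LLM-driven search that expands and contracts nodes of a collapsed 3D scene graph until the query can be answered. The sketch below illustrates one way such a loop could look; the graph methods (`to_text`, `expand`, `contract`) and the `ask_llm` call are hypothetical stand-ins for the scene-graph API and the language model, not SayPlan's actual code.

```python
def semantic_search(graph, instruction, ask_llm, max_steps=20):
    """Illustrative expand/contract semantic search over a collapsed scene graph.

    ask_llm(...) is assumed to return a dict such as
    {"action": "expand", "node": "office3"},
    {"action": "contract", "node": "kitchen"}, or
    {"action": "done", "node": "cabinet7"}.
    """
    explored = []                                    # memory of already-expanded rooms
    for _ in range(max_steps):
        reply = ask_llm(instruction=instruction,
                        graph_view=graph.to_text(),  # only the currently expanded subgraph
                        memory=explored)
        if reply["action"] == "done":
            return reply["node"]                     # node the LLM believes satisfies the query
        if reply["action"] == "contract":
            graph.contract(reply["node"])            # hide an irrelevant subtree to keep the prompt small
        else:                                        # "expand"
            graph.expand(reply["node"])
            explored.append(reply["node"])
    return None                                      # search budget exhausted without termination
```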
2307.06135#62
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
62
# A GPT-based VQA evaluation

Prompt used for GPT-based GQA [20] evaluation:

gpt_query_template = (
    "You are an AI assistant who will help me to evaluate the response given the question and the correct answer. "
    "To mark a response, you should output a single integer between 1 and 5 (including 1, 5). "
    "5 means that the response perfectly matches the answer. "
    "1 means that the response is completely different from the answer. "
    "Example 1: "
    "Question: Is it overcast? " + "Answer: no " + "Response: yes " + "Your mark: 1 "
    "Example 2: "
    "Question: Who is standing at the table? " + "Answer: woman " + "Response: Jessica " + "Your mark: 3 "
    "Example 3: "
    "Question: Are there drapes to the right of the bed? " + "Answer: yes " + "Response: yes " + "Your mark: 5 "
    "Your Turn: "
    "Question: {} " + "Answer: {} " + "Response: {} " + "Your mark: "
)
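As a rough usage sketch, the template above can be filled with `str.format` (its three `{}` slots take the question, answer, and response) and the returned text parsed back into an integer mark. The `query_chatgpt` function below is a placeholder for whatever chat-completion call is actually used; it is not defined in this excerpt.

```python
import re

def gpt_vqa_score(question: str, answer: str, response: str, query_chatgpt) -> int:
    """Fill gpt_query_template (defined above) and parse the 1-5 mark from the reply."""
    prompt = gpt_query_template.format(question, answer, response)
    reply = query_chatgpt(prompt)                 # placeholder for a real chat API call
    match = re.search(r"[1-5]", reply)            # the template asks for a single integer
    return int(match.group()) if match else 1     # fall back to the lowest mark if parsing fails
```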
2307.06281#62
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06290
62
Assistant: To find the total amount the startup invested in software development over the two years, we need to calculate the investment for the first year and then add it to the investment for the second year.
First year investment: $8000
Second year investment: Half of the first year investment = $8000/2 = $4000
Total investment for two years: First year investment + Second year investment = $8000 + $4000 = $12000
So, the total amount the startup invested in software development over the two years was $12000

Table 16: Example of model math ability.

Example 5 Extraction ability

System Message: A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user’s questions.
2307.06290#62
Instruction Mining: When Data Mining Meets Large Language Model Finetuning
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard.
http://arxiv.org/pdf/2307.06290
Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun
cs.CL, cs.AI, cs.LG
22 pages, 7 figures
null
cs.CL
20230712
20231027
[ { "id": "1905.07830" }, { "id": "1803.05457" }, { "id": "2304.03277" }, { "id": "2306.11644" }, { "id": "2211.05100" }, { "id": "2109.01652" }, { "id": "2305.11206" }, { "id": "2210.11416" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2212.10560" } ]
2307.06135
63
> semantic search based on node name
> semantic search based on node name
> semantic search with termination conditioned on sibling
> semantic search with termination conditioned on con-
> semantic search with termination conditioned on negation predicate
> semantic search with termination conditioned on children
> semantic search with multiple returns
> semantic search with multiple returns based on negation predicate
2307.06135#63
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06290
63
Example 5 Extraction ability

System Message: A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user’s questions.

User: Given these categories - Literature, History, Science, and Art. Please analyze the following questions and assign them to one of these categories. In your response, refrain from uttering any extraneous words. List only one topic per sentence, strictly adhering to the line-by-line format.
1. Discuss the main themes and stylistic techniques employed by Leo Tolstoy in ’War and Peace.’ How do they align with the wider social context of 19th-century Russia?
2. Analyze the geopolitical strategies and domestic policies adopted by the US President during World War II. How did these actions shape the post-war international order?
3. Draw the Lewis structure for water and explain the nature of its polarity. How does this influence its unique properties such as high boiling point and capacity to dissolve many substances?
4. Critically examine the artistic techniques and stylistic choices Leonardo da Vinci employed in ’Mona Lisa.’ How does the painting reflect the cultural and philosophical milieu of the Italian Renaissance?

Assistant: Literatur

Table 17: Example of model extraction ability.

Example 6 STEM ability
2307.06290#63
Instruction Mining: When Data Mining Meets Large Language Model Finetuning
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard.
http://arxiv.org/pdf/2307.06290
Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun
cs.CL, cs.AI, cs.LG
22 pages, 7 figures
null
cs.CL
20230712
20231027
[ { "id": "1905.07830" }, { "id": "1803.05457" }, { "id": "2304.03277" }, { "id": "2306.11644" }, { "id": "2211.05100" }, { "id": "2109.01652" }, { "id": "2305.11206" }, { "id": "2210.11416" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2212.10560" } ]
2307.06281
64
"You are an AI assistant who will help me to evaluate the response given the question and a list of correct answers. " "To mark a response, you should output a single integer between 1 and 5 (including 1, 5). " "5 means that the response perfectly matches one of the correct answers. " "1 means that the response is completely different from every correct answer. " "Example 1: " "Question: what is the time? " + "Answer: ['5:41', '8:00', '5:40', '5:42'] " "Response: 10:15 " + "Your mark: 1 " "Example 2: " "Question: what brand of watch is that? " + "Answer: ['ap', 'unanswerable', 'af', 'audemars'] " "Response: audemars pigu " + "Your mark: 4 " "Example 3: " "Question: how many items can you get for $5? " + "Answer: ['3', '3 for $5'] " "Response: 3 " + "Your mark: 5 " "Your Turn: " "Question: {} " + "Answer: {} " + "Response: {} " + "Your mark: " )
2307.06281#64
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06290
64
Assistant: Literatur

Table 17: Example of model extraction ability.

Example 6 STEM ability

System Message: A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user’s questions.

User: What is the central dogma of molecular biology? What processes are involved? Who named this?

Assistant: The central dogma of molecular biology is a set of principles that describe the flow of genetic information within a living organism. The central dogma is as follows:
1. DNA (Deoxyribonucleic acid) is the genetic material that carries the instructions for making proteins.
2. RNA (Ribonucleic acid) is formed from DNA and carries the genetic information to the cell’s ribosomes, where proteins are made.
3. The sequence of nucleotides in RNA determines the sequence of amino acids in the protein.
The central dogma of molecular biology was first proposed by Francis Crick and James D. Watson in their 1953 paper, "Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid." They proposed that DNA is the genetic material that carries the information for making proteins, and RNA is formed from DNA and carries the genetic information to the cell’s ribosomes, where proteins are made. The sequence of nucleotides in RNA determines the sequence of amino acids in the protein.
2307.06290#64
Instruction Mining: When Data Mining Meets Large Language Model Finetuning
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard.
http://arxiv.org/pdf/2307.06290
Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun
cs.CL, cs.AI, cs.LG
22 pages, 7 figures
null
cs.CL
20230712
20231027
[ { "id": "1905.07830" }, { "id": "1803.05457" }, { "id": "2304.03277" }, { "id": "2306.11644" }, { "id": "2211.05100" }, { "id": "2109.01652" }, { "id": "2305.11206" }, { "id": "2210.11416" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2212.10560" } ]
2307.06135
65
Instruction
Find object J64M. J64M should be kept at below 0 degrees Celsius.
Find me something non vegetarian.
Locate something sharp.
Find the room where people are playing board games.
Find an office of someone who is clearly a fan of Arnold Schwarzenegger.
There is a postdoc that has a pet Husky. Find the desk that’s most likely theirs.
One of the PhD students was given more than one complimentary T-shirts. Find his desk.
Find me the office where a paper attachment device is inside an asset that is open.
There is an office which has a cabinet containing exactly 3 items in it. Locate the office.
There is an office which has a cabinet containing a rotten apple. The cabinet name contains an even number. Locate the office.
Look for a carrot. The carrot is likely to be in a meeting room but I’m not sure.
Find me a meeting room with a RealSense camera.
Find the closest fire extinguisher to the manipulation lab.
Find me the closest meeting room to the kitchen.
Either Filipe or Tobi has my headphones. Locate it.
> semantic search guided by implicit world knowledge (knowledge not directly encoded in graph)
2307.06135#65
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
65
)
Many MiniGPT-4 GQA answers can not be matched with exact matching, but can be matched with GPT-based evaluation. Here we provide some examples:

Table 7: MiniGPT-4 GQA answers (different from GT) that can be matched by GPT-based evaluation.

GPT Score | GroundTruth | Question | MiniGPT-4 Answer
5 | right | On which side of the picture is the leather bag? | leather bag is on right side
5 | girl | Who is wearing a shirt? | woman is wearing shirt
5 | plastic | What is the pen made of? | pen is made of plastic##
4 | bike | What does the male person ride on? | bicycle
5 | Yes | Is the microwave silver and rectangular? | microwave is silver and rectangular
5 | off | How does the silver lamp appear to be, on or off? | silver lamp appears to be off ##

# B Evaluation Results at L-3 Granularity

In Tables 8 to 13, we present the performance of VLMs on all 20 L-3 abilities. We noticed that the rankings on the L-3 abilities are more diversified. Besides the leading VLMs Shikra and Kosmos-2, other VLMs, including mPLUG-Owl, InstructBLIP-13B, MiniGPT-4-13B, Otter-I, also achieve the first place in one / more sub-tasks on the dev / test split of OmniMMBench.
2307.06281#65
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06290
65
The central dogma of molecular biology is a fundamental concept in biology and has been instrumental in the understanding of genetic information flow, gene expression, and protein synthesis. It has also led to the development of various molecular biology techniques, such as DNA sequencing, RNA interference, and recombinant DNA technology

Table 18: Example of model STEM ability.

Example 7 Humanities ability

System Message: A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user’s questions.

User: Create a lesson plan that integrates drama, mime or theater techniques into a history class.
Duration: 3 class periods (each lasts for 45 minutes) for 3 days
Topic: Opium Wars between China and Britain
Grade level: 9-10

Assistant: Alright, little buddy! So, we have a history class where they’re learning about the Opium Wars between China and Britain. We want to add some fun and make it more interesting with drama, mime, or theater techniques. Here’s what we can do:
1. Day 1: Introduce the topic and the three techniques. Explain that they will be using drama, mime, and theater techniques to learn about the Opium Wars.
2. Day 2: Have the students work in groups to research the Opium Wars. They can use their phones, tablets, or laptops to find information and pictures.
2307.06290#65
Instruction Mining: When Data Mining Meets Large Language Model Finetuning
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard.
http://arxiv.org/pdf/2307.06290
Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun
cs.CL, cs.AI, cs.LG
22 pages, 7 figures
null
cs.CL
20230712
20231027
[ { "id": "1905.07830" }, { "id": "1803.05457" }, { "id": "2304.03277" }, { "id": "2306.11644" }, { "id": "2211.05100" }, { "id": "2109.01652" }, { "id": "2305.11206" }, { "id": "2210.11416" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2212.10560" } ]
2307.06135
66
or Tobi has my headphones. Locate it.
> semantic search guided by implicit world knowledge (knowledge not directly encoded in graph)
> semantic search with termination conditioned on implicit world knowledge
> unguided search with termination conditioned on implicit world knowledge
> semantic search with termination conditioned on ability to deduce context from node children using world knowledge ("board game" is not part of any node name or attribute in this graph)
> semantic search with termination conditioned on ability to deduce context from node children using world knowledge
> semantic search with termination conditioned on ability to deduce context from node children using world knowledge
> semantic search with termination conditioned on the number of children
> semantic search with termination conditioned on node descendants and their attributes
> semantic search with termination conditioned on the number of children
> semantic search guided by numerical properties
> semantic search guided by user provided bias
> semantic search that has no result (no meeting room has a realsense camera in the graph)
> search guided by node distance
> search guided by node distance
> evaluating constrained search, early termination once the two offices are explored
2307.06135#66
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
66
# C Google Bard vs. Open-Source VLMs: A Quantitative Comparison

In March 2023, Google launched Bard, a lightweight and optimized version of LaMDA based on the Transformer. Similar to ChatGPT, Bard is a closed-source model and provides service to users via a web UI. In July 2023, Google announced the latest update of Bard, which is capable of processing image input. In order to provide an overview of Bard’s multi-modal ability, we evaluate it on the test split of MMBench as below and compare it with other state-of-the-art VLMs. The test split of MMBench includes 1798 questions. During testing, we find that Bard refuses to process images containing human faces. For a fair comparison, we remove questions that Bard refuses to answer and discard questions that evaluate four human-related capabilities (Image Emotion, Identity Reasoning, Social Relation, and Action Recognition) in the test split. After filtering, we build a subset of 1226 samples and 16 leaf ability dimensions.

Figure 7: Performance of Bard compared with Shikra and Otter-I on the filtered subset.
2307.06281#66
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06290
66
2. Day 2: Have the students work in groups to research the Opium Wars. They can use their phones, tablets, or laptops to find information and pictures.
3. Day 3: Divide the class into three groups: drama, mime, and theater. Each group will create a short skit, mime, or theater performance based on what they learned about the Opium Wars.
4. Day 4: Each group will present their performance to the class. The class will watch and give feedback to the other groups.
5. Day 5: Review the key points of the Opium Wars and discuss how the drama, mime, or theater techniques helped them understand the topic better.
Remember, this is just an example, and you can change it to fit your class better. Have fun learning and creating

Table 19: Example of model Humanities ability.

# G RANDOM SAMPLING EXPERIMENT RESULTS

We provide the collected experimental results of 129 randomly selected subsets from Section 3.1. These data can be further used for other analysis purposes.

input_length output_length understandability naturalness coherence reward mtld knn_6 PPL loss
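One way such per-subset results can be used, as hinted above, is to relate each indicator to the observed evaluation loss. The sketch below fits a univariate least-squares line and computes a Pearson correlation for every indicator column; it assumes the results have been loaded into a pandas DataFrame with the column names listed above (the file name is hypothetical) and is only an illustration of how the released data might be analyzed.

```python
import numpy as np
import pandas as pd

INDICATORS = ["input_length", "output_length", "understandability", "naturalness",
              "coherence", "reward", "mtld", "knn_6", "PPL"]

def univariate_fits(df: pd.DataFrame) -> pd.DataFrame:
    """Fit loss ~ a * indicator + b separately for each indicator column."""
    rows = []
    for col in INDICATORS:
        slope, intercept = np.polyfit(df[col], df["loss"], deg=1)  # least-squares line
        rows.append({"indicator": col,
                     "slope": slope,
                     "intercept": intercept,
                     "pearson_r": df[col].corr(df["loss"])})
    return pd.DataFrame(rows)

# Hypothetical usage, assuming the 129-subset table is saved as subset_results.csv:
# print(univariate_fits(pd.read_csv("subset_results.csv")))
```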
2307.06290#66
Instruction Mining: When Data Mining Meets Large Language Model Finetuning
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard.
http://arxiv.org/pdf/2307.06290
Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun
cs.CL, cs.AI, cs.LG
22 pages, 7 figures
null
cs.CL
20230712
20231027
[ { "id": "1905.07830" }, { "id": "1803.05457" }, { "id": "2304.03277" }, { "id": "2306.11644" }, { "id": "2211.05100" }, { "id": "2109.01652" }, { "id": "2305.11206" }, { "id": "2210.11416" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2212.10560" } ]
2307.06281
67
Identity Reasoning, Social Relation, and Action Recognition) in the test split. After filtering, we build a subset of 1226 samples and 16 leaf ability dimensions. We compare Bard with two state-of-the-art VLMs that perform well on MMBench, namely Shikra and Otter-I. The result is shown in Figure 7. Bard attains an impressive overall accuracy of 51%, positioning itself among the top-tier VLMs proposed to date. Notably, Bard excels in answering questions that involve common sense reasoning. It achieves 62.3% accuracy on Nature Relation questions and 45.2% accuracy on Physical Relation questions, outperforming its counterparts, e.g. Otter-I and Shikra, by a substantial margin. Meanwhile, an analysis reveals that Bard’s performance is comparatively lower in tasks requiring spatial perception, such as Spatial Relationship and Object Localization. This observation aligns with expectations, considering that Shikra incorporates visual grounding tasks into its training data to enhance its localization capabilities, a facet potentially not integrated into Bard’s training process.
2307.06281#67
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06290
67
Table 20 column headers: input_length, output_length, understandability, naturalness, coherence, reward, mtld, knn_6, PPL, loss.
[Table 20 data: one extraction column of per-subset indicator values (roughly 18 to 106) for the 129 randomly sampled subsets.]
2307.06290#67
Instruction Mining: When Data Mining Meets Large Language Model Finetuning
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard.
http://arxiv.org/pdf/2307.06290
Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun
cs.CL, cs.AI, cs.LG
22 pages, 7 figures
null
cs.CL
20230712
20231027
[ { "id": "1905.07830" }, { "id": "1803.05457" }, { "id": "2304.03277" }, { "id": "2306.11644" }, { "id": "2211.05100" }, { "id": "2109.01652" }, { "id": "2305.11206" }, { "id": "2210.11416" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2212.10560" } ]
2307.06135
68
I need something to access ChatGPT. Where should I go?
Find the living room that contains the most electronic devices.
Find me something to eat with a lot of potassium.
I left a sock in a bedroom and one in the living room. Locate them. They should match.
Find me a potted plant that is most likely a cactus.
Find the dining room with exactly 5 chairs.
Find me the bedroom closest to the home office.
Find me a bedroom with an unusual amount of bowls.
Which bedroom is empty?
Which bathroom has the most potted plants?
The kitchen is flooded. Find somewhere I can heat up my food.
Find me the room which most likely belongs to a child.
15 guests are arriving. Locate enough chairs to seat them.
A vegetarian dinner was prepared in one of the dining rooms. Locate it.
My tie is in one of the closets. Locate it.
> semantic search guided by implicit world knowledge
> semantic search with termination conditioned on children with indirect information
> semantic search with termination conditioned on implicit world knowledge
> semantic search with multiple returns
> semantic search with termination implicitly conditioned on attribute
> semantic search with termination implicitly conditioned on quantity
> semantic search with termination implicitly conditioned on node distance
> semantic search with termination implicitly conditioned on quantity
2307.06135#68
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
68
To complement the quantitative analysis in Figure 8, we also provide some qualitative examples of Bard. Some good cases are demonstrated in Figure 8. In the left-hand example, Bard adeptly processes intricate scenes, distills key information, and arrives at a reasonable conclusion. Notably, the majority of VLMs subjected to our testing fail to deliver the correct response to this particular question. In the right-hand example, Bard recognizes the correct concept from the cartoon, sidestepping any potential confusion arising from the harmonious interaction between a snake and a mouse. This highlights Bard's exceptional common sense reasoning ability. In Figure 9, we present illustrative examples that highlight Bard's performance shortcomings. These instances originate from both image style and image quality tasks. The former requires the model to discern image categories, while the latter involves assessing visual attributes, such as brightness, across a pair of images. A shared characteristic between these tasks is the insignificance of image content concerning the task's objectives. Bard performs poorly on the two tasks, achieving 50% and 7% accuracy, respectively. The accompanying tables within these cases visually demonstrate Bard's tendency to excessively focus on semantic concepts and depicted objects within the provided text and image, leaving it struggling to effectively address inquiries regarding holistic styles and attributes.
2307.06281#68
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06290
68
[Table 20, continued: a further column of per-subset indicator values (roughly 199 to 478).]
2307.06290#68
Instruction Mining: When Data Mining Meets Large Language Model Finetuning
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard.
http://arxiv.org/pdf/2307.06290
Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun
cs.CL, cs.AI, cs.LG
22 pages, 7 figures
null
cs.CL
20230712
20231027
[ { "id": "1905.07830" }, { "id": "1803.05457" }, { "id": "2304.03277" }, { "id": "2306.11644" }, { "id": "2211.05100" }, { "id": "2109.01652" }, { "id": "2305.11206" }, { "id": "2210.11416" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2212.10560" } ]
2307.06135
69
> semantic search with termination implicitly conditioned on attribute of children
> semantic search with termination implicitly conditioned on quantity of children
> semantic search with termination implicitly conditioned on node distance
> semantic search with termination implicitly conditioned on quantity of children
> semantic search with termination implicitly conditioned on quantity of children
> semantic search with termination implicitly conditioned on quantity of children
> semantic search guided by negation
> semantic search with termination conditioned on ability to deduce context from node children using world knowledge
> semantic search with termination implicitly conditioned on the quantity of specified node selection criteria
> semantic search with specified node selection criteria based on world knowledge
> evaluating constrained search that has no result, termination after closets
2307.06135#69
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
69
Last but not least, in all the aforementioned examples, Bard consistently delivers well-structured responses, frequently utilizing bullet-point lists and tables to enhance clarity. Moreover, across a majority of the questions, Bard adheres to a consistent response format: presenting the predicted option initially, subsequently offering a comprehensive rationale, and culminating by enumerating the reasons for the incorrectness of alternative choices. From the perspective of being a chatbot, Bard undeniably stands out as one of the most exceptional multi-modal chatbots.

# D Examples in MMBench

In Figures 10 to 15, we illustrate some examples in MMBench, grouped by the L-2 abilities.

# E Definition of Each Ability

In Appendix E, we introduce the definition of each ability.
2307.06281#69
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06135
70
Table 10: Complex Search Instructions. Evaluated in Home Environment.

# C.3 Simple Planning

# Instruction
Close Jason's cabinet.
Refrigerate the orange left on the kitchen bench.
Take care of the dirty plate in the lunchroom.
Place the printed document on Will's desk.
Peter is working hard at his desk. Get him a healthy snack.
Hide one of Peter's valuable belongings.
Wipe the dusty admin shelf.
There is coffee dripping on the floor. Stop it.
Place Will's drone on his desk.
Move the monitor from Jason's office to Filipe's.
My parcel just got delivered! Locate it and place it in the appropriate lab.
Check if the coffee machine is working.
Heat up the chicken kebab.
Something is smelling in the kitchen. Dispose of it.
Throw what the agent is holding in the bin.

Table 11: Simple Planning Instructions. Evaluated in Office Environment.

# C.4 Long Horizon Planning

# Instruction
2307.06135#70
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
70
How many directions do the branching roads from the tallest main road in the image lead to in total? The answer is (D). 6. If you look closely at the image, you can see that the tallest main road branches off into 6 different directions. There are two roads that branch off to the left, two roads that branch off to the right, and one road that branches off straight ahead. Here are the 6 directions that the branching roads lead to: * North * Northeast * Southeast * South * Southwest * Northwest

There are several options for the number of directions the branching roads from the tallest main road in the image lead to. However, the image itself does not provide a definitive answer. The image shows a highway with multiple lanes and multiple roads branching off from it, but the exact number of directions cannot be determined without more information.

In nature, what's the relationship between these two creatures? A. Predatory relationships B. Competitive relationships C. Parasitic relationships D. Symbiotic relationship **ANSWER. A** The answer is **(A) Predatory relationships**. In nature, snakes are predators and mice are prey. Snakes are carnivores, which means they eat other animals. Mice are
2307.06281#70
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06135
71
Heat up the noodles in the fridge, and place it somewhere where I can enjoy it.
Throw the rotting fruit in Dimity's office in the correct bin.
Wash all the dishes on the lunch table. Once finished, place all the clean cutlery in the drawer.
Safely file away the freshly printed document in Will's office then place the undergraduate thesis on his desk.
Make Niko a coffee and place the mug on his desk.
Someone has thrown items in the wrong bins. Correct this.
Tobi spilt soda on his desk. Throw away the can and take him something to clean with.
I want to make a sandwich. Place all the ingredients on the lunch table.
A delegation of project partners is arriving soon. We want to serve them snacks and non-alcoholic drinks. Prepare everything in the largest meeting room. Use items found in the supplies room only.
Serve bottled water to the attendees who are seated in meeting room 1. Each attendee can only receive a single bottle of water.
Empty the dishwasher. Place all items in their correct locations.
Locate all 6 complimentary t-shirts given to the PhD students and place
2307.06135#71
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
71
relationships**. In nature, snakes are predators and mice are prey. Snakes are carnivores, which means they eat other animals. Mice are herbivores, which means they eat plants. This means that snakes and mice have a predator-prey relationship. In this relationship, the snake is the predator and the mouse is the prey. The snake hunts and kills the mouse, and then the snake eats the mouse. This relationship is essential for the balance of nature, as it helps to keep the population of mice in check. The other options are not correct. Competitive relationships occur when two species compete for the same resources. Parasitic relationships occur when one organism benefits from another organism at the expense of the other organism. Symbiotic relationships occur when two organisms benefit from each other. In the image, the snake is clearly about to eat the mouse. This shows that the relationship between these two creatures is predatory.

In nature, the relationship between the two creatures, the snake and the frog, is a symbiotic relationship. This means that they have a mutually beneficial interaction, where each creature provides something to the other that is advantageous to both. In this case, the
2307.06281#71
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06135
72
a single bottle of water.
Empty the dishwasher. Place all items in their correct locations.
Locate all 6 complimentary t-shirts given to the PhD students and place them on the shelf in admin.
I'm hungry. Bring me an apple from Peter and a pepsi from Tobi. I'm at the lunch table.
Let's play a prank on Niko. Dimity might have something.
There is an office which has a cabinet containing a rotten apple. The cabinet name contains an even number. Locate the office, throw away the fruit and get them a fresh apple.
2307.06135#72
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
72
relationship. This means that they have a mutually beneficial interaction, where each creature provides something to the other that is advantageous to both. In this case, the snake might provide the frog with a meal, such as a mouse or a small rodent, while the frog benefits from the snake's venom, which can help protect the frog from predators. This mutualistic relationship is essential for the survival of both species in their natural habitat.
2307.06281#72
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06135
73
# Table 12: Long-Horizon Planning Instructions. Evaluated in Office Environment.

# D Full 3D Scene Graph: Office Environment

Figure 5: 3D Scene Graph - Fully Expanded Office Environment. Full 3D scene graph exposing all the rooms, assets and objects available in the scene. Note that the LLM agent never sees all this information unless it chooses to expand every possible node without contraction.

# E Contracted 3D Scene Graph: Office Environment

[Figure 6 graphic: contracted 3D scene graph of the office environment; legend: Room, Asset, Object, Pose, Agent.]
2307.06135#73
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06290
73
[Table 20, continued: a further column of per-subset indicator values (roughly 53 to 73).]
2307.06290#73
Instruction Mining: When Data Mining Meets Large Language Model Finetuning
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard.
http://arxiv.org/pdf/2307.06290
Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun
cs.CL, cs.AI, cs.LG
22 pages, 7 figures
null
cs.CL
20230712
20231027
[ { "id": "1905.07830" }, { "id": "1803.05457" }, { "id": "2304.03277" }, { "id": "2306.11644" }, { "id": "2211.05100" }, { "id": "2109.01652" }, { "id": "2305.11206" }, { "id": "2210.11416" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2212.10560" } ]
2307.06135
74
Figure 6: 3D Scene Graph - Contracted Office Environment. Contracted 3D scene graph exposing only the highest level within the hierarchy - room nodes. This results in an 82.1% reduction in the number of tokens required to represent the scene before the semantic search phase.

# F Semantic Search Evaluation Results

Full listings of the generated semantic search sequences for the evaluation instruction sets are provided on the following pages.

Find me object K31X.
mobile_robotics_lab manipulation_lab agriculture_lab robot_lounge1 robot_lounge2 peters_office tobis_office nikos_office | mobile_robotics_lab manipulation_lab printing_zone1 | admin printing_zone2 supplies_station

Find me a carrot.

Find me anything purple in the postdoc bays.
postdoc_bay1 postdoc_bay1

Find me a ripe banana.
kitchen kitchen

Find me something that has a screwdriver in it.
agriculture_lab | robot_lounge1 mobile_robotics_lab | manipulation_lab | robot_lounge2 supplies_station printing_zone1 printing_zone2 robot_lounge1 robot_lounge2

One of the offices has a poster of the Terminator. Which one is it?
2307.06135#74
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
74
Which art style is evident in this image? A. watercolor B. oil paint C. vector art D. early renaissance **ANSWER. B** The art style evident in the image is **(A) watercolor**. The painting has a loose, painterly quality that is characteristic of watercolor. The colors are vibrant and the brushstrokes are visible. There is also a sense of movement and energy in the painting, which is also characteristic of watercolor. The other options are not as likely. Oil paint is a thicker medium that would not create the same loose, painterly quality. Vector art is a digital art form that is typically created using computer software. Early Renaissance art is characterized by its realism and attention to detail, which is not present in the image. Here is a table comparing the different art styles:

| Art style | Characteristics |
| Watercolor | Loose, painterly quality |
| Oil paint | Thicker medium, creates more realistic images |
| Vector art | Digital art form, created using computer software |
| Early Renaissance | Realism, attention to detail |

I hope this helps! Let me know if you have any other questions. QUESTION. Which image is more brightful? A. The first image B. The second image **ANSWER. B** The first image is
2307.06281#74
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06135
75
One of the offices has a poster of the Terminator. Which one is it?
peters_office tobis_office nikos_office luis_office wills_office dimitys_office filipes_office | chris_office aarons_office michaels_office

I printed a document, but I don't know which printer has it. Find the document.
printing_zone1 printing_zone2 | printing_zone2

I left my headphones in one of the meeting rooms. Locate them.
meeting_room1 | meeting_room1 meeting_room2 meeting_room4

Find the PhD bay that has a drone in it.
phd_bay1 phd_bay2 phd_bay3

Find the kale that is not in the kitchen.
mobile_robotics_lab cafeteria agriculture_lab

Find me an office that does not have a cabinet.
wills_office luis_office filipes_office ajays_office dimitys_office peters_office lauriannes_office chris_office

Find me an office that contains a cabinet, a desk and a chair.
2307.06135#75
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
75
me know if you have any other questions. QUESTION. Which image is more brightful? A. The first image B. The second image **ANSWER. B** The first image is brighter than the second image. The moon is the brightest object in the first image, and it is surrounded by a dark background. The apples in the first image are also brighter than the apples in the second image. The second image is darker overall, and the moon is not as bright. Here is a table that compares the brightness of the two images:

| Image | Moon | Apples | Background |
| First image | Bright | Brighter | Dark |
| Second image | Dark | Darker | Darker |

Therefore, the answer to the question is (A). The first image is brighter.
2307.06281#75
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06135
76
Find me an office that contains a cabinet, a desk and a chair.
peters_office tobis_office filipes_office wills_office luis_office dimitys_office peters_office lauriannes_office ajays_office chris_office tobis_office

Find me a book that was left next to a robotic gripper.
mobile_robotics_lab

Luis gave one of his neighbours a stapler. Find the stapler.
luis_office wills_office | luis_office wills_office

There is a meeting room with a chair but no table. Locate it.
meeting_room1 | meeting_room1

Table 13: Simple Search Office Environment Evaluation. Sequence of Explored Nodes for Simple Search Office Environment Instructions.

Find object J64M. J64M should be kept at below 0 degrees Celsius.
Find me something non vegetarian.
Locate something sharp.
kitchen mobile_robotics_lab manipulation_lab agriculture_lab peters_office tobis_office | manipulation_lab nikos_office kitchen cafeteria agriculture_lab | printing_zone1 supplies_station printing_zone2 admin peters_office tobis_office nikos_office
2307.06135#76
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06290
76
Table 20: Random experiment results 1. (Extraction residue: a flattened, unlabeled column of per-run values follows here in the source and cannot be reliably assigned to rows or columns, so it is omitted. The indicator columns reported in the table are input_length, output_length, understandability, naturalness, coherence, reward, mtld, knn_6, PPL, and loss.)
2307.06290#76
Instruction Mining: When Data Mining Meets Large Language Model Finetuning
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard.
http://arxiv.org/pdf/2307.06290
Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun
cs.CL, cs.AI, cs.LG
22 pages, 7 figures
null
cs.CL
20230712
20231027
[ { "id": "1905.07830" }, { "id": "1803.05457" }, { "id": "2304.03277" }, { "id": "2306.11644" }, { "id": "2211.05100" }, { "id": "2109.01652" }, { "id": "2305.11206" }, { "id": "2210.11416" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2212.10560" } ]
2307.06135
77
Find the room where people are playing board games. Explored nodes: presentation_lounge, cafeteria, meeting_room1, meeting_room2, meeting_room3, cafeteria, presentation_lounge, meeting_room1, meeting_room2, meeting_room3.
Find the office of someone who is clearly a fan of Arnold Schwarzenegger. Explored nodes: peters_office, tobis_office, nikos_office, chris_office, wills_office, ajays_office.
There is a postdoc that has a pet Husky. Find the desk that’s most likely theirs. Explored nodes: postdoc_bay1, postdoc_bay2, postdoc_bay1, postdoc_bay2.
One of the PhD students was given more than one complimentary T-shirt. Find his desk. Explored nodes: phd_bay1, phd_bay2.
Find me the office where a paper attachment device is inside an asset that is open. Explored nodes: peters_office, tobis_office, nikos_office, wills_office, nikos_office.
There is an office which has a cabinet containing exactly 3 items in it. Locate the office. Explored nodes (some names garbled): peters_office, tobis_office, nikos_office, michaels_office, aarons_office, jasons_office, ajays_office, chris_office, dimitys_office, dimitys_office, lauriannes_office, chris_office, ajay_office.
2307.06135#77
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
77
Split | VLM | CP | Image Style | Image Scene | Image Emotion | Image Quality
OpenFlamingo | 2.0% | 0.0% | 1.9% | 0.0% | 1.9%
OpenFlamingo v2 | 5.0% | 3.8% | 3.8% | 4.0% | 3.7%
MMGPT | 20.8% | 3.8% | 21.0% | 56.0% | 13.0%
MiniGPT-4 | 35.6% | 15.1% | 53.3% | 32.0% | 20.4%
LLaVA | 40.6% | 30.2% | 43.8% | 70.0% | 22.2%
VisualGLM | 47.3% | 37.7% | 73.3% | 50.0% | 0.0%
PandaGPT | 48.3% | 30.2% | 69.5% | 62.0% | 1.9%
DEV | InstructBLIP | 49.0% | 49.1% | 65.7% | 50.0% | 5.6%
MiniGPT-4-13B | 50.7% | 64.2% | 61.9% | 56.0% | 3.7%
InstructBLIP-13B | 56.4% | 73.6% | 73.3% | 58.0% | 11.1%
LLaMA-Adapter | 56.4% | 45.3% | 75.2% | 76.0% | 3.7%
µ-G2PT | 58.1% | 47.2%
2307.06281#77
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06290
77
Random experiment results (continued); indicator columns: input_length, output_length, understandability, naturalness, coherence, reward, mtld, knn_6, PPL, loss. (Extraction residue: the flattened, unlabeled column of values is omitted because entries cannot be reliably assigned to rows or columns.)
2307.06290#77
Instruction Mining: When Data Mining Meets Large Language Model Finetuning
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard.
http://arxiv.org/pdf/2307.06290
Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun
cs.CL, cs.AI, cs.LG
22 pages, 7 figures
null
cs.CL
20230712
20231027
[ { "id": "1905.07830" }, { "id": "1803.05457" }, { "id": "2304.03277" }, { "id": "2306.11644" }, { "id": "2211.05100" }, { "id": "2109.01652" }, { "id": "2305.11206" }, { "id": "2210.11416" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2212.10560" } ]
2307.06135
78
There is an office containing a rotten apple. The cabinet name contains an even number. Locate the office. Explored nodes: peters_office, tobis_office, nikos_office, michaels_office, aarons_office, ajays_office, chris_office, dimitys_office, lauriannes_office, michaels_office, nikos_office, dimitys_office, chris_office, ajays_office, jasons_office.
Look for a carrot. The carrot is likely to be in a meeting room but I’m not sure. Explored nodes: meeting_room1, meeting_room2, meeting_room3, meeting_room4, meeting_room1, meeting_room2, meeting_room3.
Find me a meeting room with a RealSense camera. Explored nodes: meeting_room1, meeting_room2, meeting_room3, meeting_room1, meeting_room2, meeting_room3, meeting_room4, meeting_room4, presentation_lounge.
Find the closest fire extinguisher to the manipulation lab. Explored nodes: manipulation_lab.
Find me the closest meeting room to the kitchen. Explored nodes: meeting_room3.
Either Filipe or Tobi has my headphones. Locate them. Explored nodes: filipes_office, tobis_office, filipes_office.
Table 14: Complex Search Office Environment Evaluation. Sequence of Explored Nodes for Complex Search Office Environment Instructions.
# C Fail
2307.06135#78
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
78
LLaMA-Adapter | 56.4% | 45.3% | 75.2% | 76.0% | 3.7%
µ-G2PT | 58.1% | 47.2% | 80.0% | 60.0% | 20.4%
mPLUG-Owl | 64.1% | 73.6% | 72.4% | 84.0% | 16.7%
Otter-I | 65.4% | 64.2% | 84.8% | 70.0% | 16.7%
Kosmos-2 | 72.5% | 69.8% | 82.9% | 82.0% | 40.7%
Shikra | 75.8% | 62.3% | 96.2% | 86.0% | 37.0%
OpenFlamingo | 1.5% | 0.0% | 4.6% | 0.0% | 1.2%
OpenFlamingo v2 | 4.0% | 1.2% | 5.4% | 3.6% | 2.3%
MMGPT | 18.3% | 3.5% | 30.0% | 35.7% | 7.0%
MiniGPT-4 | 28.3% | 9.4% | 46.2% | 19.0% | 7.0%
LLaVA | 40.4% | 37.6% | 52.3% | 47.6% | 29.1%
InstructBLIP | 41.1% | 34.1% | 61.5% | 35.7% | 0.0%
VisualGLM
2307.06281#78
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06290
78
(Extraction residue: a flattened, unlabeled column of per-run indicator values from the random-experiment tables; the entries cannot be reliably assigned to rows or columns and are omitted.)
2307.06290#78
Instruction Mining: When Data Mining Meets Large Language Model Finetuning
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard.
http://arxiv.org/pdf/2307.06290
Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun
cs.CL, cs.AI, cs.LG
22 pages, 7 figures
null
cs.CL
20230712
20231027
[ { "id": "1905.07830" }, { "id": "1803.05457" }, { "id": "2304.03277" }, { "id": "2306.11644" }, { "id": "2211.05100" }, { "id": "2109.01652" }, { "id": "2305.11206" }, { "id": "2210.11416" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2212.10560" } ]
2307.06135
79
Find me a FooBar. Explored nodes: kitchen0, living_room0.
Find me a bottle of wine. Explored nodes (diagram partially garbled): kitchen0, dining_room2, dining_room0, kitchen1, living_room0, living_room1.
Find me a plant with thorns. Explored nodes: living_room0, living_room1, dining_room0, dining_room2, bedroom, dining_room1, living_room2, bathroom0.
Find me a plant that needs watering. Explored nodes: living_room0, living_room1, dining_room0, dining_room2, bedroom, bedroom, dining_room1.
Find me a bathroom with no toilet. Explored nodes (first entries garbled): bathroom2, bathroom3, bathroom4.
The baby dropped their rattle in one of the rooms. Locate it. Explored nodes (some names garbled): living_room0, bedroom, bedroom, living_room0, living_room1, living_room2, dining_room0, dining_room1, playroom0, dining_room2, bedroom, bedroom, bedroom2.
I left my suitcase either in the bedroom
2307.06135#79
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
79
52.3% | 47.6% | 29.1% (tail of a row split at the chunk boundary)
InstructBLIP | 41.1% | 34.1% | 61.5% | 35.7% | 0.0%
VisualGLM | 41.5% | 30.6% | 68.5% | 41.7% | 0.0%
TEST | PandaGPT | 47.9% | 38.8% | 66.2% | 38.1% | 17.4%
µ-G2PT | 49.8% | 41.2% | 75.4% | 38.1% | 20.9%
LLaMA-Adapter | 50.6% | 31.8% | 70.0% | 44.0% | 25.6%
MiniGPT-4-13B | 50.6% | 62.4% | 75.4% | 38.1% | 4.7%
InstructBLIP-13B | 53.8% | 61.2% | 71.5% | 41.7% | 5.8%
mPLUG-Owl | 59.4% | 71.8% | 75.4% | 54.8% | 17.4%
Otter-I | 60.6% | 62.4% | 83.1% | 46.4% | 4.7%
Image Topic (column values in table row order): 8.3%, 13.9%, 8.3%, 41.7%, 33.3%, 52.8%, 63.9%, 63.9%, 61.1%, 47.2%, 69.4%, 63.9%, 69.4%, 77.8%
2307.06281#79
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06290
79
(Extraction residue: a flattened, unlabeled column of per-run indicator values from the random-experiment tables; the entries cannot be reliably assigned to rows or columns and are omitted.)
2307.06290#79
Instruction Mining: When Data Mining Meets Large Language Model Finetuning
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard.
http://arxiv.org/pdf/2307.06290
Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun
cs.CL, cs.AI, cs.LG
22 pages, 7 figures
null
cs.CL
20230712
20231027
[ { "id": "1905.07830" }, { "id": "1803.05457" }, { "id": "2304.03277" }, { "id": "2306.11644" }, { "id": "2211.05100" }, { "id": "2109.01652" }, { "id": "2305.11206" }, { "id": "2210.11416" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2212.10560" } ]
2307.06290
80
(Extraction residue: a flattened, unlabeled column of per-run indicator values from the random-experiment tables; the entries cannot be reliably assigned to rows or columns and are omitted.)
2307.06290#80
Instruction Mining: When Data Mining Meets Large Language Model Finetuning
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard.
http://arxiv.org/pdf/2307.06290
Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun
cs.CL, cs.AI, cs.LG
22 pages, 7 figures
null
cs.CL
20230712
20231027
[ { "id": "1905.07830" }, { "id": "1803.05457" }, { "id": "2304.03277" }, { "id": "2306.11644" }, { "id": "2211.05100" }, { "id": "2109.01652" }, { "id": "2305.11206" }, { "id": "2210.11416" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2212.10560" } ]
2307.06135
81
Find the room with a ball in it.
I forgot my book on a bed. Locate it. Explored nodes (partially garbled): bedroom, bedroom, bedroom.
Find an empty vase that was left next to a sink. Explored nodes: bathroom0, bathroom1, bathroom2, bathroom3, kitchen0, kitchen1, bathroom0, bathroom1, bathroom2, bathroom3.
Locate the dining room which has a table, chair and a baby monitor.
Locate a chair that is not in any dining room.
I need to shave. Which room has both a razor and shaving cream. Explored nodes (some names garbled): bathroom0, bathroom, bathroom, bathroom0, bathroom1, bathroom2, bathroom3.
Find me 2 bedrooms with pillows in them. Explored nodes (some names garbled): bedroom0, bedroom, bedroom2, bedroom3, bedroom0, bedroom, bedroom2, bedroom3.
Find me 2 bedrooms without pillows in them. Explored nodes (some names garbled): bedroom, bedroom, bedroom2, bedroom, bedroom0.
Table 15: Simple Search Home Environment Evaluation. Sequence of Explored Nodes for Simple Search Home Environment Instructions.
# SayPlan
I need something to access ChatGPT. Where should I go? Explored nodes: home_office0, home_office0.
Find the living room that contains the most electronic devices. Explored nodes: living_room0, living_room1, living_room0, living_room1.
Find me something to eat with a lot of potassium. Explored nodes: kitchen0.
2307.06135#81
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06290
81
(Extraction residue: a flattened, unlabeled column of per-run indicator values from the random-experiment tables; the entries cannot be reliably assigned to rows or columns and are omitted.)
2307.06290#81
Instruction Mining: When Data Mining Meets Large Language Model Finetuning
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard.
http://arxiv.org/pdf/2307.06290
Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun
cs.CL, cs.AI, cs.LG
22 pages, 7 figures
null
cs.CL
20230712
20231027
[ { "id": "1905.07830" }, { "id": "1803.05457" }, { "id": "2304.03277" }, { "id": "2306.11644" }, { "id": "2211.05100" }, { "id": "2109.01652" }, { "id": "2305.11206" }, { "id": "2210.11416" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2212.10560" } ]
2307.06135
82
living_room0, living_room1, living_room0, living_room1 (continuation of the previous instruction's explored nodes)
Find me something to eat with a lot of potassium. Explored nodes: kitchen0.
I left a sock in a bedroom and in one of the living rooms. Locate them. They should match. Explored nodes (some names garbled): kitchen0, bedroom, bedroom1, bedroom2, living_room0, bedroom2, bedroom0, bedroom1, bedroom2, bedroom3, living_room0.
Find the potted plant that is most likely a cactus. Explored nodes: living_room0, living_room1, kitchen0, living_room0, living_room1.
Find the dining room with exactly 5 chairs. Explored nodes: dining_room0, dining_room1, dining_room0, dining_room1.
Find me the bedroom closest to the home office. Explored nodes: home_office0.
Find me the bedroom with an unusual amount of bowls. (Explored-node sequence garbled in the extraction.)
Which bedroom is empty. Explored nodes (some names garbled): bedroom, bedroom1, bedroom2, bedroom3, bedroom3.
Which bathroom has the most potted plants. Explored nodes (some names garbled): bathroom0, bathroom, bathroom.
The kitchen is flooded. Find somewhere I can heat up my food. Explored nodes: kitchen1, kitchen0.
Find me the room which most likely belongs to a child.
2307.06135#82
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
82
Split | VLM | FP-S | Object Localization | Attribute Recognition | Celebrity Recognition
OpenFlamingo | 6.7% | 2.5% | 10.4% | 4.0%
OpenFlamingo v2 | 8.1% | 1.2% | 14.3% | 10.1%
MMGPT | 14.1% | 1.2% | 10.4% | 27.3%
PandaGPT | 27.9% | 14.8% | 50.6% | 23.2%
MiniGPT-4 | 30.3% | 7.4% | 50.6% | 35.4%
InstructBLIP | 37.0% | 6.2% | 51.9% | 47.5%
VisualGLM | 43.8% | 19.8% | 41.6% | 67.7%
DEV | LLaVA | 45.5% | 8.6% | 64.9% | 59.6%
µ-G2PT | 46.5% | 11.1% | 71.4% | 63.6%
Otter-I | 46.8% | 16.0% | 61.0% | 58.6%
LLaMA-Adapter | 47.5% | 17.3% | 68.8% | 59.6%
InstructBLIP-13B | 47.8% | 14.8% | 55.8% | 69.7%
MiniGPT-4-13B | 49.5% | 28.4% | 54.5% | 68.7%
mPLUG-Owl | 50.2%
2307.06281#82
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06290
82
(Extraction residue: a flattened, unlabeled column of per-run indicator values from the random-experiment tables; the entries cannot be reliably assigned to rows or columns and are omitted.)
2307.06290#82
Instruction Mining: When Data Mining Meets Large Language Model Finetuning
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard.
http://arxiv.org/pdf/2307.06290
Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun
cs.CL, cs.AI, cs.LG
22 pages, 7 figures
null
cs.CL
20230712
20231027
[ { "id": "1905.07830" }, { "id": "1803.05457" }, { "id": "2304.03277" }, { "id": "2306.11644" }, { "id": "2211.05100" }, { "id": "2109.01652" }, { "id": "2305.11206" }, { "id": "2210.11416" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2212.10560" } ]
2307.06135
83
Find me the room which most likely belongs to a child. (Explored-node sequence garbled in the extraction.)
15 guests are arriving. Locate enough chairs to seat them. Explored nodes (some names garbled): dining_room0, dining_room1, living_room0, home_office0, bedroom, dining_room0, dining_room1, dining_room2, living_room0, living_room1.
A vegetarian dinner was prepared in one of the dining rooms. Locate it. Explored nodes (some names garbled): dining_room0, dining_room, dining_room0, dining_room.
My tie is in one of the closets. Locate it.
Table 16: Complex Search Home Environment Evaluation. Sequence of Explored Nodes for Complex Search Home Environment Instructions.
# G Causal Planning Evaluation Results
In this section, we provide a detailed breakdown of the causal planning performance of SayPlan across the two sets of evaluation instructions. Tables 17 and 18 detail the correctness, executability and the number of iterative replanning steps it took to obtain an executable plan.
2307.06135#83
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
83
69.7% (tail of a row split at the chunk boundary)
MiniGPT-4-13B | 49.5% | 28.4% | 54.5% | 68.7%
mPLUG-Owl | 50.2% | 18.5% | 63.6% | 70.7%
Shikra | 57.2% | 32.1% | 75.3% | 63.6%
Kosmos-2 | 64.3% | 38.3% | 71.4% | 80.8%
OpenFlamingo | 2.5% | 2.9% | 3.0% | 1.7%
OpenFlamingo v2 | 5.5% | 2.9% | 7.0% | 8.5%
MMGPT | 18.2% | 3.8% | 23.0% | 28.0%
PandaGPT | 20.2% | 7.6% | 37.0% | 19.5%
MiniGPT-4 | 28.7% | 8.6% | 41.0% | 29.7%
InstructBLIP | 33.0% | 2.9% | 41.0% | 40.7%
VisualGLM | 35.8% | 8.6% | 40.0% | 52.5%
TEST | µ-G2PT | 41.8% | 17.1% | 56.0% | 59.3%
LLaVA | 41.8% | 13.3% | 47.0% | 50.8%
LLaMA-Adapter | 45.0% | 15.2% | 59.0% | 56.8%
2307.06281#83
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06290
83
(Extraction residue: a flattened, unlabeled column of per-run indicator values from the random-experiment tables; the entries cannot be reliably assigned to rows or columns and are omitted.)
2307.06290#83
Instruction Mining: When Data Mining Meets Large Language Model Finetuning
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard.
http://arxiv.org/pdf/2307.06290
Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun
cs.CL, cs.AI, cs.LG
22 pages, 7 figures
null
cs.CL
20230712
20231027
[ { "id": "1905.07830" }, { "id": "1803.05457" }, { "id": "2304.03277" }, { "id": "2306.11644" }, { "id": "2211.05100" }, { "id": "2109.01652" }, { "id": "2305.11206" }, { "id": "2210.11416" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2212.10560" } ]
2307.06135
84
Instruction | Corr. | Exec. | No. of Replanning Iterations
Close Jason’s cabinet. | ✓ | ✓ | 0
Refrigerate the orange left on the kitchen bench. | ✓ | ✓ | 0
Take care of the dirty plate in the lunchroom. | ✓ | ✓ | 0
Place the printed document on Will’s desk. | ✓ | ✓ | 0
Peter is working hard at his desk. Get him a healthy snack. | ✗ | ✓ | 5
Hide one of Peter’s valuable belongings. | ✓ | ✓ | 0
Wipe the dusty admin shelf. | ✓ | ✓ | 0
There is coffee dripping on the floor. Stop it. | ✓ | ✓ | 0
Place Will’s drone on his desk. | ✓ | ✓ | 0
Move the monitor from Jason’s office to Filipe’s. | ✓ | ✓ | 0
My parcel just got delivered! Locate it and place it in the appropriate lab. | ✓ | ✓ | 0
Check if the coffee machine is working. | ✓ | ✓ | 0
Heat up the chicken kebab. | ✓ | ✓ | 1
Something is smelling in the kitchen. Dispose of it. | ✓ | ✓ | 0
Throw what the agent is holding in the bin. | ✓ | ✓ | 1
2307.06135#84
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
84
LLaVA | 41.8% | 13.3% | 47.0% | 50.8%
LLaMA-Adapter | 45.0% | 15.2% | 59.0% | 56.8%
InstructBLIP-13B | 45.8% | 5.7% | 45.0% | 65.3%
Otter-I | 46.8% | 11.4% | 54.0% | 57.6%
MiniGPT-4-13B | 49.8% | 21.0% | 64.0% | 55.9%
mPLUG-Owl | 53.0% | 16.2% | 63.0% | 78.0%
Shikra | 61.8% | 27.6% | 72.0% | 70.3%
Kosmos-2 | 65.6% | 40.4% | 59.0% | 79.7%
OCR (column values in table row order): 15.0%, 5.0%, 15.0%, 22.5%, 25.0%, 45.0%, 37.5%, 47.5%, 27.5%, 52.5%, 37.5%, 45.0%, 35.0%, 37.5%, 57.5%, 62.5%, 2.6%, 2.6%, 16.9%, 16.9%, 39.0%, 51.9%, 41.6%, 29.9%, 59.7%, 49.4%, 71.4%, 68.8%, 61.0%, 51.9%, 81.8%, 86.8%
2307.06281#84
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06290
84
(Extraction residue: a flattened, unlabeled column of per-run indicator values from the random-experiment tables; the entries cannot be reliably assigned to rows or columns and are omitted.)
2307.06290#84
Instruction Mining: When Data Mining Meets Large Language Model Finetuning
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard.
http://arxiv.org/pdf/2307.06290
Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun
cs.CL, cs.AI, cs.LG
22 pages, 7 figures
null
cs.CL
20230712
20231027
[ { "id": "1905.07830" }, { "id": "1803.05457" }, { "id": "2304.03277" }, { "id": "2306.11644" }, { "id": "2211.05100" }, { "id": "2109.01652" }, { "id": "2305.11206" }, { "id": "2210.11416" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2212.10560" } ]
2307.06290
85
(Extraction residue: a flattened, unlabeled column of per-run indicator values; the entries cannot be reliably assigned to rows or columns and are omitted.)
Table 21: Random experiment results 2.
2307.06290#85
Instruction Mining: When Data Mining Meets Large Language Model Finetuning
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard.
http://arxiv.org/pdf/2307.06290
Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun
cs.CL, cs.AI, cs.LG
22 pages, 7 figures
null
cs.CL
20230712
20231027
[ { "id": "1905.07830" }, { "id": "1803.05457" }, { "id": "2304.03277" }, { "id": "2306.11644" }, { "id": "2211.05100" }, { "id": "2109.01652" }, { "id": "2305.11206" }, { "id": "2210.11416" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2212.10560" } ]
2307.06135
86
Instruction Corr. Exec. No. of Replanning Iterations Heat up the noodles in the fridge, and place it somewhere where I can enjoy it. Throw the rotting fruit in Dimity’s office in the correct bin. Wash all the dishes on the lunch table. Once finished, place all the clean cutlery in the drawer. Safely file away the freshly printed document in Will’s office then place the undergraduate thesis on his desk. Make Niko a coffee and place the mug on his desk. Someone has thrown items in the wrong bins. Correct this. Tobi spilt soda on his desk. Throw away the can and take him something to clean with. I want to make a sandwich. Place all the ingredients on the lunch table. A delegation of project partners is arriving soon. We want to serve them snacks and non-alcoholic drinks. Prepare everything in the largest meeting room. Use items found in the supplies room only. Serve bottled water to the attendees who are seated in meeting room 1. Each attendee can only receive a single bottle of water. Empty the dishwasher. Place all items in their correct locations. Locate all 6 complimentary t-shirts given to the PhD stu-
2307.06135#86
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
86
Split VLM FP-C Spatial Relationship Attribute Comparison Action Recognition OpenFlamingo v2 1.4% 2.2% 2.3% 0.0% OpenFlamingo 2.8% 2.2% 0.0% 5.4% MMGPT 3.4% 2.2% 2.3% 5.4% MiniGPT-4 9.0% 0.0% 11.4% 14.3% InstructBLIP 21.4% 6.7% 11.4% 41.1% VisualGLM 23.4% 0.0% 31.8% 35.7% InstructBLIP-13B 24.8% 6.7% 36.4% 30.4% DEV MiniGPT-4-13B 26.2% 20.0% 20.5% 35.7% LLaVA 32.4% 6.7% 38.6% 48.2% PandaGPT 35.2% 11.1% 25.0% 62.5% LLaMA-Adapter 38.6% 11.1% 47.7% 53.6% Otter-I 38.6% 15.6% 4.5% 83.9% µ-G2PT 38.6% 6.7% 45.5% 58.9% mPLUG-Owl 40.7%
2307.06281#86
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06290
86
Table 21: Random experiment results 2. 22 0.996 0.980 0.979 0.978 0.976 0.978 0.985 1.004 0.981 0.987 0.998 0.986 0.999 0.993 0.991 0.991 0.972 0.988 0.984 0.974 0.990 0.998 0.975 0.977 0.982 0.982 1.000 0.978 0.966 0.973 0.995 0.961 0.985 1.001 0.992 1.002 0.998 1.016 0.994 0.990 0.999 0.991 0.975 0.986 0.978 0.986 1.000 0.979 0.975 0.997 0.958 0.969 0.974 0.976 0.978 0.972 0.981 1.001 0.980 0.995 0.975 0.978 0.972 1.004 0.997 0.997 0.985 0.989 0.990 0.980 0.980
2307.06290#86
Instruction Mining: When Data Mining Meets Large Language Model Finetuning
Large language models (LLMs) are initially pretrained for broad capabilities and then finetuned with instruction-following datasets to improve their performance in interacting with humans. Despite advances in finetuning, a standardized guideline for selecting high-quality datasets to optimize this process remains elusive. In this paper, we first propose InstructMining, an innovative method designed for automatically selecting premium instruction-following data for finetuning LLMs. Specifically, InstructMining utilizes natural language indicators as a measure of data quality, applying them to evaluate unseen datasets. During experimentation, we discover that double descent phenomenon exists in large language model finetuning. Based on this observation, we further leverage BlendSearch to help find the best subset among the entire dataset (i.e., 2,532 out of 100,000). Experiment results show that InstructMining-7B achieves state-of-the-art performance on two of the most popular benchmarks: LLM-as-a-judge and Huggingface OpenLLM leaderboard.
http://arxiv.org/pdf/2307.06290
Yihan Cao, Yanbin Kang, Chi Wang, Lichao Sun
cs.CL, cs.AI, cs.LG
22 pages, 7 figures
null
cs.CL
20230712
20231027
[ { "id": "1905.07830" }, { "id": "1803.05457" }, { "id": "2304.03277" }, { "id": "2306.11644" }, { "id": "2211.05100" }, { "id": "2109.01652" }, { "id": "2305.11206" }, { "id": "2210.11416" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2212.10560" } ]
2307.06135
87
a single bottle of water. Empty the dishwasher. Place all items in their correct locations. Locate all 6 complimentary t-shirts given to the PhD students and place them on the shelf in admin. I’m hungry. Bring me an apple from Peter and a Pepsi from Tobi. I’m at the lunch table. Let’s play a prank on Niko. Dimity might have something. There is an office which has a cabinet containing a rotten apple. The cabinet name contains an even number. Locate the office, throw away the fruit and get them a fresh apple. [Table residue: the per-instruction correctness/executability marks and replanning-iteration counts did not survive extraction.]
2307.06135#87
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
87
4.5% 83.9% µ-G2PT 38.6% 6.7% 45.5% 58.9% mPLUG-Owl 40.7% 13.3% 27.3% 73.2% Kosmos-2 49.0% 31.1% 56.8% 57.1% Shikra 57.9% 33.3% 45.5% 87.5% OpenFlamingo v2 0.8% 0.0% 0.0% 2.3% OpenFlamingo 1.6% 1.2% 3.8% 0.0% MMGPT 5.2% 3.7% 3.8% 8.0% MiniGPT-4 11.2% 9.8% 6.2% 17.0% VisualGLM 17.6% 7.3% 8.8% 35.2% MiniGPT-4-13B 19.6% 17.1% 8.8% 31.8% LLaVA 20.0% 14.6% 21.2% 23.9% TEST PandaGPT 20.4% 12.2% 15.0% 33.0% InstructBLIP-13B 24.0% 9.8% 17.5% 43.2% InstructBLIP 24.4% 9.8% 2.5% 58.0%
2307.06281#87
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06135
88
# No. of Replanning Iterations Table 18: Correctness, Executability and Number of Replanning Iterations for Long-Horizon Planning Instructions. Evaluating the performance of SayPlan on each long-horizon planning instruction. Values indicated in red indicate that no executable plan was identified up to that number of iterative replanning steps. In this case, 5 was the maximum number of replanning steps. The full plan sequences generated by SayPlan and all the baseline methods for each of the above instructions are detailed in Table 19. Note the regions highlighted in red indicating the precise action where a plan failed. Full listings of the generated plan sequences for the simple and long-horizon instruction sets are provided on the following pages. [Plan-sequence figure residue; legend: SayPlan, LLM+P, LLM-As-Planner, Success, Fail. Instructions shown: Close Jason's cabinet. Refrigerate the orange left on the kitchen bench. Take care of the dirty plate in the lunchroom. Place the printed document on Will's desk. Peter is working hard at his desk. Get him a healthy snack. Hide one of Peter's valuable belongings. Wipe the dusty admin shelf. The individual action sequences did not survive extraction.]
2307.06135#88
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06135
89
[Plan-sequence figure residue; legend: SayPlan, LLM+P, LLM-As-Planner, Success, Fail. Instructions shown: Hide one of Peter's valuable belongings. Wipe the dusty admin shelf. There is coffee dripping on the floor. Stop it. Place Will's drone on his desk. Move the monitor from Jason's office to Filipe's. My parcel just got delivered! Locate it and place it in the appropriate lab. Check if the coffee machine is working. Heat up the chicken kebab. Something is smelling in the kitchen. Dispose of it. Throw what the agent is holding in the bin. Heat up the noodles in the fridge, and place it somewhere where I can enjoy it. The individual action sequences did not survive extraction.]
2307.06135#89
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06135
90
[Plan-sequence figure residue; legend: SayPlan, LLM+P, LLM-As-Planner, Success, Fail. Instructions shown: Heat up the noodles in the fridge, and place it somewhere where I can enjoy it. Throw the rotting fruit in Dimity's office in the correct bin. Wash all the dishes on the lunch table. Once finished, place all the clean cutlery in the drawer. Safely file away the freshly printed document in Will's office then place the undergraduate thesis on his desk. Only fragments of the action sequences (e.g. goto(pose23), access(lunch table)) survived extraction.]
2307.06135#90
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
90
Split VLM AR Physical Property Reasoning Function Reasoning Identity Reasoning OpenFlamingo 8.0% 10.7% 8.6% 2.2% OpenFlamingo v2 15.4% 14.7% 8.6% 28.9% MMGPT 26.4% 24.0% 9.9% 60.0% MiniGPT-4 31.3% 14.7% 19.8% 80.0% LLaMA-Adapter 35.3% 16.0% 32.1% 73.3% µ-G2PT 38.8% 20.0% 38.3% 71.1% PandaGPT 38.8% 16.0% 46.9% 62.2% DEV VisualGLM 44.3% 18.7% 50.6% 75.6% InstructBLIP 46.3% 17.3% 51.9% 84.4% LLaVA 48.3% 25.3% 53.1% 77.8% MiniGPT-4-13B 50.7% 30.7% 49.4% 86.7% mPLUG-Owl 53.2% 18.7% 66.7% 86.7% InstructBLIP-13B 54.2% 30.7% 56.8% 88.9% Kosmos-2 55.7% 33.3%
2307.06281#90
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06135
91
[Plan-sequence figure residue; legend: SayPlan, LLM+P, LLM-As-Planner, Success, Fail. Instructions shown: Safely file away the freshly printed document in Will's office then place the undergraduate thesis on his desk. Make Niko a coffee and place the mug on his desk. Someone has thrown items in the wrong bins. Correct this. Tobi spilt soda on his desk. Throw away the can and take him something to clean with. I want to make a sandwich. Place all the ingredients on the lunch table. Empty the dishwasher. Place all items in their correct locations. The individual action sequences did not survive extraction.]
2307.06135#91
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
91
86.7% InstructBLIP-13B 54.2% 30.7% 56.8% 88.9% Kosmos-2 55.7% 33.3% 56.8% 91.1% Otter-I 56.7% 29.3% 61.7% 93.3% Shikra 56.7% 30.7% 63.0% 88.9% OpenFlamingo 11.4% 14.0% 9.3% 11.0% OpenFlamingo v2 12.8% 9.0% 7.5% 24.4% MMGPT 23.9% 13.0% 12.1% 52.4% MiniGPT-4 32.9% 13.0% 29.9% 61.0% PandaGPT 41.5% 15.0% 42.1% 73.2% µ-G2PT 46.7% 31.0% 38.3% 76.8% LLaMA-Adapter 47.4% 25.0% 44.9% 78.0% TEST InstructBLIP 47.4% 17.0% 52.3% 78.0% VisualGLM 48.8% 26.0% 44.9% 81.7% LLaVA 53.6% 35.0% 48.6% 82.9% mPLUG-Owl
2307.06281#91
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06135
92
[Plan-sequence figure residue; legend: SayPlan, LLM+P, LLM-As-Planner, Success, Fail. Instructions shown: A delegation of project partners is arriving soon. We want to serve them snacks and non-alcoholic drinks. Prepare everything in the largest meeting room. Use items found in the supplies room only. Serve bottled water to the attendees who are seated in meeting room 1. Each attendee can only receive a single bottle of water. Locate all 6 complimentary t-shirts given to the PhD students and place them on the shelf in admin. I'm hungry. Bring me an apple from Peter and a pepsi from Tobi. I'm at the lunch table. The individual action sequences did not survive extraction.]
2307.06135#92
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06135
93
[Plan-sequence figure residue; instructions shown: I'm hungry. Bring me an apple from Peter and a pepsi from Tobi. I'm at the lunch table. Let's play a prank on Niko. Dimity might have something. The individual action sequences did not survive extraction.] Table 19: Causal Planning Evaluation. Task planning action sequences generated for a mobile manipulator robot to follow for both the simple and long-horizon planning instruction sets. # H Scalability Ablation Study In this study, we evaluate the ability of SayPlan and the underlying LLM to reason over larger-scale scene graphs. More specifically, as SayPlan’s initial input is a collapsed 3DSG, we explore how increasing the number of nodes in this base environment impacts the ability of the LLM to attend to the relevant parts of the scene graph for both semantic search and iterative replanning.
2307.06135#93
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06135
94
[Figure residue: success/failure of semantic search queries plotted against base environment size (30, 100, 200, 300 room nodes). Simple-search queries include "Find me a carrot.", "Find me a book that was left next to a robotic gripper." and "Find me a ripe banana."; complex-search queries include a query about an item that should be kept below 0 degrees Celsius, "Find me something non-vegetarian." and "There is a postdoc who has a pet Husky. Find their desk."] Figure 7: Evaluating the performance of the underlying LLM's semantic search capabilities as the scale of the environment increases. For the office environment used in this study, we are primarily interested in the number of room nodes present in the collapsed form of the 3DSG. [Figure residue: success/failure of planning instructions plotted against base environment size (30, 100, 200, 300 room nodes). Simple-planning instructions include "Close Jason's cabinet.", "Hide one of Peter's valuable belongings." and "Something is smelling in the kitchen. Dispose of it."; long-horizon instructions include "Heat up the noodles in the fridge, and place it somewhere where I can enjoy it.", "Let's play a prank on Niko. Dimity might have something." and "Tobi spilt soda on his desk. Throw away the can and take him something to clean with."]
2307.06135#94
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
94
Split VLM LR Structuralized Image-Text Understanding Future Prediction MMGPT 2.5% 2.6% 2.4% OpenFlamingo v2 4.2% 5.1% 2.4% OpenFlamingo 6.7% 9.0% 2.4% MiniGPT-4 7.5% 5.1% 11.9% PandaGPT 10.0% 10.3% 9.5% VisualGLM 10.8% 12.8% 7.1% LLaMA-Adapter 11.7% 7.7% 19.0% DEV µ-G2PT 13.3% 10.3% 19.0% InstructBLIP 14.2% 14.1% 14.3% mPLUG-Owl 16.7% 10.3% 28.6% LLaVA 16.7% 17.9% 14.3% InstructBLIP-13B 19.2% 19.2% 19.0% MiniGPT-4-13B 20.8% 20.5% 21.4% Shikra 25.8% 16.7% 42.9% Otter-I 32.5% 20.5% 54.8% Kosmos-2 46.7% 43.6% 52.4% MMGPT 1.1% 0.0% 2.7%
2307.06281#94
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06135
95
Figure 8: Evaluating the performance of SayPlan’s causal planning capabilities as the scale of the environment increases. For the office environment used in this study, we are primarily interested in the number of room nodes present in the collapsed form of the 3DSG. We note here that all the failures that occurred across both semantic search and iterative replanning were a result of the LLM’s input exceeding the maximum token limits – in the case of GPT-4 this corresponded to 8192 tokens. With regard to the scalability to larger environments, this is an important observation as it indicates that the LLM’s reasoning capabilities, or its ability to attend to the relevant parts of the 3DSG, are not significantly impacted by the presence of “noisy” or an increasing number of nodes. One potential downside to larger environments, however, is the increased number of steps required before semantic search converges. As more semantically relevant floor or room nodes enter the scene, each one of these may be considered by the LLM for exploration. # I Real World Execution of a Generated Long Horizon Plan.
2307.06135#95
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
95
20.5% 54.8% Kosmos-2 46.7% 43.6% 52.4% MMGPT 1.1% 0.0% 2.7% OpenFlamingo 9.1% 1.0% 20.3% VisualGLM 11.4% 3.9% 21.6% OpenFlamingo v2 11.4% 2.9% 23.0% LLaMA-Adapter 13.1% 11.8% 14.9% MiniGPT-4 13.6% 3.9% 27.0% µ-G2PT 14.8% 6.9% 25.7% TEST PandaGPT 15.3% 6.9% 27.0% LLaVA 15.9% 13.7% 18.9% InstructBLIP-13B 17.0% 5.9% 32.4% MiniGPT-4-13B 17.0% 6.9% 31.1% mPLUG-Owl 19.9% 5.9% 39.2% InstructBLIP 21.6% 4.9% 44.6%
2307.06281#95
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]
2307.06135
96
[Figure residue: rendered text of a generated long-horizon plan (a sequence of goto/access/pickup/release actions, labelled "Generated plan"); the text was rotated in the source PDF and is not recoverable.]
2307.06135#96
SayPlan: Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning
Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.
http://arxiv.org/pdf/2307.06135
Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, Niko Suenderhauf
cs.RO, cs.AI
Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
null
cs.RO
20230712
20230927
[ { "id": "2204.00598" }, { "id": "2210.05359" }, { "id": "2304.11477" }, { "id": "2302.04761" }, { "id": "2210.03629" }, { "id": "2207.05608" }, { "id": "2201.11903" }, { "id": "2303.14100" }, { "id": "2302.05128" }, { "id": "2302.12813" }, { "id": "2304.11116" }, { "id": "2212.04088" } ]
2307.06281
97
Split VLM RR Social Relation Physical Relation Nature Relation OpenFlamingo 0.0% 0.0% 0.0% 0.0% OpenFlamingo v2 0.9% 0.0% 0.0% 2.1% MiniGPT-4 4.3% 2.3% 8.3% 4.2% MMGPT 13.0% 14.0% 0.0% 18.8% InstructBLIP 22.6% 34.9% 8.3% 18.8% PandaGPT 23.5% 20.9% 8.3% 33.3% LLaMA-Adapter 29.6% 37.2% 16.7% 29.2% DEV MiniGPT-4-13B 30.4% 53.5% 8.3% 20.8% LLaVA 30.4% 37.2% 12.5% 33.3% InstructBLIP-13B 34.8% 55.8% 8.3% 29.2% VisualGLM 35.7% 62.8% 8.3% 25.0% µ-G2PT 40.9% 60.5% 16.7% 35.4% Kosmos-2 43.5% 76.7% 29.2% 20.8% mPLUG-Owl 47.8% 69.8%
2307.06281#97
MMBench: Is Your Multi-modal Model an All-around Player?
Large vision-language models have recently achieved remarkable progress, exhibiting great perception and reasoning abilities concerning visual information. However, how to effectively evaluate these large vision-language models remains a major obstacle, hindering future model development. Traditional benchmarks like VQAv2 or COCO Caption provide quantitative performance measurements but suffer from a lack of fine-grained ability assessment and non-robust evaluation metrics. Recent subjective benchmarks, such as OwlEval, offer comprehensive evaluations of a model's abilities by incorporating human labor, but they are not scalable and display significant bias. In response to these challenges, we propose MMBench, a novel multi-modality benchmark. MMBench methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element is a meticulously curated dataset that surpasses existing similar benchmarks in terms of the number and variety of evaluation questions and abilities. The second element introduces a novel CircularEval strategy and incorporates the use of ChatGPT. This implementation is designed to convert free-form predictions into pre-defined choices, thereby facilitating a more robust evaluation of the model's predictions. MMBench is a systematically-designed objective benchmark for robustly evaluating the various abilities of vision-language models. We hope MMBench will assist the research community in better evaluating their models and encourage future advancements in this domain. Project page: https://opencompass.org.cn/mmbench.
http://arxiv.org/pdf/2307.06281
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin
cs.CV, cs.CL
null
null
cs.CV
20230712
20230813
[ { "id": "2302.13971" }, { "id": "2306.15195" }, { "id": "2305.03726" }, { "id": "2304.10592" }, { "id": "2106.09685" }, { "id": "2301.12597" }, { "id": "1504.00325" }, { "id": "2306.14824" }, { "id": "2305.16355" }, { "id": "2305.08322" }, { "id": "2111.02114" }, { "id": "2304.14178" }, { "id": "2304.15010" }, { "id": "2305.06500" }, { "id": "2304.08485" } ]