Dataset columns:
doi: string (length 10)
chunk-id: int64 (values 0–936)
chunk: string (length 401–2.02k)
id: string (length 12–14)
title: string (length 8–162)
summary: string (length 228–1.92k)
source: string (length 31)
authors: string (length 7–6.97k)
categories: string (length 5–107)
comment: string (length 4–398)
journal_ref: string (length 8–194)
primary_category: string (length 5–17)
published: string (length 8)
updated: string (length 8)
references: list
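The records below follow this schema, one field per line in the order listed. As a minimal sketch of how such a chunked arXiv corpus could be read and regrouped by paper, assuming the rows are exported as JSON Lines (the file name chunks.jsonl and the group_by_paper helper are hypothetical, not part of the dataset):

```python
import json
from collections import defaultdict

def load_chunks(path):
    """Yield one record per line; each record carries the columns listed above."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

def group_by_paper(records):
    """Reassemble chunk text per paper, ordered by chunk-id (hypothetical helper).

    Note: adjacent chunks overlap, so a naive join repeats some text.
    """
    papers = defaultdict(list)
    for rec in records:
        papers[rec["doi"]].append((rec["chunk-id"], rec["chunk"]))
    return {
        doi: " ".join(text for _, text in sorted(parts))
        for doi, parts in papers.items()
    }

if __name__ == "__main__":
    records = list(load_chunks("chunks.jsonl"))  # hypothetical export of this dataset
    papers = group_by_paper(records)
    print(f"{len(papers)} papers reconstructed from {len(records)} chunks")
```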
2306.09212
127
[Results table fragment: paired per-subject accuracy values ("x / y"); the corresponding subject and model labels are not contained in this chunk.]
2306.09212#127
CMMLU: Measuring massive multitask language understanding in Chinese
As the capabilities of large language models (LLMs) continue to advance, evaluating their performance becomes increasingly crucial and challenging. This paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese benchmark that covers various subjects, including natural science, social sciences, engineering, and humanities. We conduct a thorough evaluation of 18 advanced multilingual- and Chinese-oriented LLMs, assessing their performance across different subjects and settings. The results reveal that most existing LLMs struggle to achieve an average accuracy of 50%, even when provided with in-context examples and chain-of-thought prompts, whereas the random baseline stands at 25%. This highlights significant room for improvement in LLMs. Additionally, we conduct extensive experiments to identify factors impacting the models' performance and propose directions for enhancing LLMs. CMMLU fills the gap in evaluating the knowledge and reasoning capabilities of large language models within the Chinese context.
http://arxiv.org/pdf/2306.09212
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
cs.CL
null
null
cs.CL
20230615
20240117
[ { "id": "2302.13971" }, { "id": "2304.12986" }, { "id": "2307.00360" }, { "id": "2211.09110" }, { "id": "2305.08322" }, { "id": "2307.15020" }, { "id": "2307.09288" }, { "id": "2305.15011" }, { "id": "2303.08774" }, { "id": "2306.01116" }, { "id": "2304.08177" }, { "id": "2305.10263" } ]
2306.09212
129
Under review Table 11: Zero-shot accuracy of models. We report macro-average accuracy over the subjects within each category. "Overall" = macro-average score over all subjects. "State" indicates whether the model is pre-trained (Base) or fine-tuned to follow instructions (Chat). '*' indicates that both a Base and a Chat model were released; we report the one with the better overall accuracy. The first block contains multilingual- or English-oriented models, and the second block contains Chinese-oriented models. To save space, we do not present models with an overall score lower than 30.
2306.09212#129
CMMLU: Measuring massive multitask language understanding in Chinese
As the capabilities of large language models (LLMs) continue to advance, evaluating their performance becomes increasingly crucial and challenging. This paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese benchmark that covers various subjects, including natural science, social sciences, engineering, and humanities. We conduct a thorough evaluation of 18 advanced multilingual- and Chinese-oriented LLMs, assessing their performance across different subjects and settings. The results reveal that most existing LLMs struggle to achieve an average accuracy of 50%, even when provided with in-context examples and chain-of-thought prompts, whereas the random baseline stands at 25%. This highlights significant room for improvement in LLMs. Additionally, we conduct extensive experiments to identify factors impacting the models' performance and propose directions for enhancing LLMs. CMMLU fills the gap in evaluating the knowledge and reasoning capabilities of large language models within the Chinese context.
http://arxiv.org/pdf/2306.09212
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
cs.CL
null
null
cs.CL
20230615
20240117
[ { "id": "2302.13971" }, { "id": "2304.12986" }, { "id": "2307.00360" }, { "id": "2211.09110" }, { "id": "2305.08322" }, { "id": "2307.15020" }, { "id": "2307.09288" }, { "id": "2305.15011" }, { "id": "2303.08774" }, { "id": "2306.01116" }, { "id": "2304.08177" }, { "id": "2305.10263" } ]
2306.09212
130
Model          State  STEM   Humanities  Social Science  Other  China-specific  Overall
GPT4           Chat   63.13  69.19       70.26           73.16  63.47           68.89
ChatGPT        Chat   44.80  53.61       54.22           59.95  49.74           53.22
LLaMA2-70B*    Base   40.23  53.41       50.10           52.91  45.16           48.87
BLOOMZ-7B      Chat   33.03  45.74       45.74           46.25  41.58           42.80
Falcon-40B     Base   31.11  41.30       40.87           40.61  36.05           38.50
LLaMA2-13B*    Chat   31.57  37.89       38.10           39.00  35.44           36.60
LLaMA-65B      Base   31.09  34.45       36.05           37.94  32.89           34.88
BXLLaMA-30B    Chat   28.79  32.61       31.65           34.22  31.47           31.69
LLaMA-30B      Base   30.02  31.87       31.51           32.90  29.64           31.54
BXLLaMA-13B    Chat   26.46  29.36       31.81           31.55  29.17           30.06
Baichuan2-13B* Xverse-13B* InternLM-20B* Baichuan-13B* InternLM-7B*
2306.09212#130
CMMLU: Measuring massive multitask language understanding in Chinese
As the capabilities of large language models (LLMs) continue to advance, evaluating their performance becomes increasingly crucial and challenging. This paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese benchmark that covers various subjects, including natural science, social sciences, engineering, and humanities. We conduct a thorough evaluation of 18 advanced multilingual- and Chinese-oriented LLMs, assessing their performance across different subjects and settings. The results reveal that most existing LLMs struggle to achieve an average accuracy of 50%, even when provided with in-context examples and chain-of-thought prompts, whereas the random baseline stands at 25%. This highlights significant room for improvement in LLMs. Additionally, we conduct extensive experiments to identify factors impacting the models' performance and propose directions for enhancing LLMs. CMMLU fills the gap in evaluating the knowledge and reasoning capabilities of large language models within the Chinese context.
http://arxiv.org/pdf/2306.09212
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
cs.CL
null
null
cs.CL
20230615
20240117
[ { "id": "2302.13971" }, { "id": "2304.12986" }, { "id": "2307.00360" }, { "id": "2211.09110" }, { "id": "2305.08322" }, { "id": "2307.15020" }, { "id": "2307.09288" }, { "id": "2305.15011" }, { "id": "2303.08774" }, { "id": "2306.01116" }, { "id": "2304.08177" }, { "id": "2305.10263" } ]
2306.09212
131
Model           State  STEM   Humanities  Social Science  Other  China-specific  Overall
Baichuan2-13B*  Base   47.59  65.57       65.24           65.47  62.10           60.88
Xverse-13B*     Base   43.42  60.51       60.65           64.20  56.69           57.04
InternLM-20B*   Chat   43.68  61.78       58.19           57.54  55.26           55.06
Baichuan-13B*   Base   41.63  60.26       59.62           56.15  56.03           54.40
InternLM-7B*    Base   43.04  56.72       56.96           54.50  54.55           52.83
ChatGLM2-6B     Chat   42.98  52.42       52.56           52.15  49.38           50.01
BatGPT-15B      Chat   43.15  50.91       52.66           52.23  49.09           49.81
Baichuan-7B     Base   32.79  44.43       46.83           44.79  43.19           42.35
ChatGLM-6B      Chat   32.54  42.91       44.91           42.29  42.08           40.80
Random          –      25.00  25.00       25.00           25.00  25.00           25.00
2306.09212#131
CMMLU: Measuring massive multitask language understanding in Chinese
As the capabilities of large language models (LLMs) continue to advance, evaluating their performance becomes increasingly crucial and challenging. This paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese benchmark that covers various subjects, including natural science, social sciences, engineering, and humanities. We conduct a thorough evaluation of 18 advanced multilingual- and Chinese-oriented LLMs, assessing their performance across different subjects and settings. The results reveal that most existing LLMs struggle to achieve an average accuracy of 50%, even when provided with in-context examples and chain-of-thought prompts, whereas the random baseline stands at 25%. This highlights significant room for improvement in LLMs. Additionally, we conduct extensive experiments to identify factors impacting the models' performance and propose directions for enhancing LLMs. CMMLU fills the gap in evaluating the knowledge and reasoning capabilities of large language models within the Chinese context.
http://arxiv.org/pdf/2306.09212
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
cs.CL
null
null
cs.CL
20230615
20240117
[ { "id": "2302.13971" }, { "id": "2304.12986" }, { "id": "2307.00360" }, { "id": "2211.09110" }, { "id": "2305.08322" }, { "id": "2307.15020" }, { "id": "2307.09288" }, { "id": "2305.15011" }, { "id": "2303.08774" }, { "id": "2306.01116" }, { "id": "2304.08177" }, { "id": "2305.10263" } ]
2306.09212
133
Model               STEM         Humanities    Social Science  Other         China-specific  Overall
Baichuan2-13B-Chat  42.7 (-2.5)  57.7 (-6.3)   56.0 (-8.0)     55.4 (-6.6)   53.8 (-7.7)
BatGPT-15B-sirius   34.7 (-3.5)  44.2 (-2.6)   45.8 (-2.2)     46.6 (-1.2)   43.6
ChatGLM-6B          29.9 (-2.3)  37.9 (-4.8)   39.6 (-4.6)     36.2 (-6.1)
ChatGLM2-6B         42.6 (+0.1)  52.3 (+0.3)   51.3 (-0.9)     51.6 (-0.3)
ChatGPT             46.6 (+1.4)  52.5 (-1.0)   54.0 (-0.3)     58.0 (-2.0)
InternLM-Chat-20B   32.3 (-9.8)  48.1 (-10.7)  48.1 (-9.8)     44.6 (-11.0)
Xverse-13B-Chat     30.5 (-9.6)  40.2 (-16.1)  43.0 (-14.3)    42.8 (-15.3)
2306.09212#133
CMMLU: Measuring massive multitask language understanding in Chinese
As the capabilities of large language models (LLMs) continue to advance, evaluating their performance becomes increasingly crucial and challenging. This paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese benchmark that covers various subjects, including natural science, social sciences, engineering, and humanities. We conduct a thorough evaluation of 18 advanced multilingual- and Chinese-oriented LLMs, assessing their performance across different subjects and settings. The results reveal that most existing LLMs struggle to achieve an average accuracy of 50%, even when provided with in-context examples and chain-of-thought prompts, whereas the random baseline stands at 25%. This highlights significant room for improvement in LLMs. Additionally, we conduct extensive experiments to identify factors impacting the models' performance and propose directions for enhancing LLMs. CMMLU fills the gap in evaluating the knowledge and reasoning capabilities of large language models within the Chinese context.
http://arxiv.org/pdf/2306.09212
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
cs.CL
null
null
cs.CL
20230615
20240117
[ { "id": "2302.13971" }, { "id": "2304.12986" }, { "id": "2307.00360" }, { "id": "2211.09110" }, { "id": "2305.08322" }, { "id": "2307.15020" }, { "id": "2307.09288" }, { "id": "2305.15011" }, { "id": "2303.08774" }, { "id": "2306.01116" }, { "id": "2304.08177" }, { "id": "2305.10263" } ]
2306.08568
0
arXiv:2306.08568v1 [cs.CL] 14 Jun 2023 # WizardCoder: Empowering Code Large Language Models with Evol-Instruct # Ziyang Luo2∗ Can Xu1∗ Pu Zhao1 Qingfeng Sun1 Xiubo Geng1 Wenxiang Hu1 Chongyang Tao1 Jing Ma2 Qingwei Lin1 Daxin Jiang1† 1Microsoft 2Hong Kong Baptist University {caxu,puzhao,qins,xigeng,wenxh,chongyang.tao,qlin,djiang}@microsoft.com {cszyluo, majing}@comp.hkbu.edu.hk # Abstract
2306.08568#0
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+. Our code, model weights, and data are public at https://github.com/nlpxucan/WizardLM
http://arxiv.org/pdf/2306.08568
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
cs.CL, cs.AI
Large Language model, Code Generation, Code LLMs
null
cs.CL
20230614
20230614
[ { "id": "2305.06161" }, { "id": "2304.12244" }, { "id": "2212.10560" } ]
2306.08302
1
Abstract—Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolve by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
2306.08302#1
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
2306.08568
1
# Abstract Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic’s Claude and Google’s Bard, on HumanEval and HumanEval+. Our code, model weights, and data are public at https://github.com/nlpxucan/WizardLM. # Introduction
2306.08568#1
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+. Our code, model weights, and data are public at https://github.com/nlpxucan/WizardLM
http://arxiv.org/pdf/2306.08568
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
cs.CL, cs.AI
Large Language model, Code Generation, Code LLMs
null
cs.CL
20230614
20230614
[ { "id": "2305.06161" }, { "id": "2304.12244" }, { "id": "2212.10560" } ]
2306.08651
1
Abstract: Consider a robot tasked with tidying a desk with a meticulously constructed Lego sports car. A human may recognize that it is not socially appropriate to disassemble the sports car and put it away as part of the “tidying.” How can a robot reach that conclusion? Although large language models (LLMs) have recently been used to enable social reasoning, grounding this reasoning in the real world has been challenging. To reason in the real world, robots must go beyond passively querying LLMs and actively gather information from the environment that is required to make the right decision. For instance, after detecting that there is an occluded car, the robot may need to actively perceive the car to know whether it is an advanced model car made out of Legos or a toy car built by a toddler. We propose an approach that leverages an LLM and vision language model (VLM) to help a robot actively perceive its environment to perform grounded social reasoning. To evaluate our framework at scale, we release the MESSYSURFACES dataset which contains images of 70 real-world surfaces that need to be cleaned. We additionally illustrate our approach with a robot on 2 carefully designed surfaces. We find an average
2306.08651#1
Toward Grounded Social Reasoning
Consider a robot tasked with tidying a desk with a meticulously constructed Lego sports car. A human may recognize that it is not socially appropriate to disassemble the sports car and put it away as part of the "tidying". How can a robot reach that conclusion? Although large language models (LLMs) have recently been used to enable social reasoning, grounding this reasoning in the real world has been challenging. To reason in the real world, robots must go beyond passively querying LLMs and *actively gather information from the environment* that is required to make the right decision. For instance, after detecting that there is an occluded car, the robot may need to actively perceive the car to know whether it is an advanced model car made out of Legos or a toy car built by a toddler. We propose an approach that leverages an LLM and vision language model (VLM) to help a robot actively perceive its environment to perform grounded social reasoning. To evaluate our framework at scale, we release the MessySurfaces dataset which contains images of 70 real-world surfaces that need to be cleaned. We additionally illustrate our approach with a robot on 2 carefully designed surfaces. We find an average 12.9% improvement on the MessySurfaces benchmark and an average 15% improvement on the robot experiments over baselines that do not use active perception. The dataset, code, and videos of our approach can be found at https://minaek.github.io/groundedsocialreasoning.
http://arxiv.org/pdf/2306.08651
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
cs.RO, cs.AI
null
null
cs.RO
20230614
20230614
[ { "id": "1606.06565" }, { "id": "2008.02275" }, { "id": "2303.00001" }, { "id": "2305.06500" }, { "id": "2201.05320" } ]
2306.08302
2
enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
2306.08302#2
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
2306.08568
2
# Introduction Recently, Large Language Models (LLMs) [1–9] have garnered significant attention and demonstrated impressive success. Notably, OpenAI’s ChatGPT stands out as a prominent example. Leveraging extensive pre-training on vast amounts of internet data and further fine-tuning with detailed instruction data [10], these models have achieved state-of-the-art (SOTA) zero-shot performance across diverse tasks. This trend is also observed in the domain of code understanding and generation. Numerous Code LLMs [11–18] have been proposed to tackle the challenges associated with code-related tasks. These Code LLMs undergo pre-training using substantial amounts of code data, enabling them to excel in various code-related tasks, showcasing impressive performance.
2306.08568#2
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+. Our code, model weights, and data are public at https://github.com/nlpxucan/WizardLM
http://arxiv.org/pdf/2306.08568
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
cs.CL, cs.AI
Large Language model, Code Generation, Code LLMs
null
cs.CL
20230614
20230614
[ { "id": "2305.06161" }, { "id": "2304.12244" }, { "id": "2212.10560" } ]
2306.08640
2
Recent research on Large Language Models (LLMs) has led to remarkable ad- vancements in general NLP AI assistants. Some studies have further explored the use of LLMs for planning and invoking models or APIs to address more general multi-modal user queries. Despite this progress, complex visual-based tasks still remain challenging due to the diverse nature of visual tasks. This diversity is reflected in two aspects: 1) Reasoning paths. For many real-life applications, it is hard to accurately decompose a query simply by examining the query itself. Planning based on the specific visual content and the results of each step is usually required. 2) Flexible inputs and intermediate results. Input forms could be flexible for in-the-wild cases, and involves not only a single image or video but a mixture of videos and images, e.g., a user-view image with some reference videos. Besides, a complex reasoning process will also generate diverse multimodal intermediate results, e.g., video narrations, segmented video clips, etc. To address such general cases, we propose a multi-modal AI assistant, AssistGPT, with an interleaved code and language
2306.08640#2
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
Recent research on Large Language Models (LLMs) has led to remarkable advancements in general NLP AI assistants. Some studies have further explored the use of LLMs for planning and invoking models or APIs to address more general multi-modal user queries. Despite this progress, complex visual-based tasks still remain challenging due to the diverse nature of visual tasks. This diversity is reflected in two aspects: 1) Reasoning paths. For many real-life applications, it is hard to accurately decompose a query simply by examining the query itself. Planning based on the specific visual content and the results of each step is usually required. 2) Flexible inputs and intermediate results. Input forms could be flexible for in-the-wild cases, and involves not only a single image or video but a mixture of videos and images, e.g., a user-view image with some reference videos. Besides, a complex reasoning process will also generate diverse multimodal intermediate results, e.g., video narrations, segmented video clips, etc. To address such general cases, we propose a multi-modal AI assistant, AssistGPT, with an interleaved code and language reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate LLMs with various tools. Specifically, the Planner is capable of using natural language to plan which tool in Executor should do next based on the current reasoning progress. Inspector is an efficient memory manager to assist the Planner to feed proper visual information into a specific tool. Finally, since the entire reasoning process is complex and flexible, a Learner is designed to enable the model to autonomously explore and discover the optimal solution. We conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving state-of-the-art results. Moreover, showcases demonstrate the ability of our system to handle questions far more complex than those found in the benchmarks.
http://arxiv.org/pdf/2306.08640
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
cs.CV
Project page: https://showlab.github.io/assistgpt/
null
cs.CV
20230614
20230628
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "1810.04805" }, { "id": "2305.06355" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2201.11903" }, { "id": "2204.00598" }, { "id": "2212.00280" }, { "id": "2112.08614" }, { "id": "2304.04370" }, { "id": "2107.03374" }, { "id": "2303.16434" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2205.10747" }, { "id": "2303.05499" }, { "id": "2211.09699" }, { "id": "2205.14100" }, { "id": "2212.09522" }, { "id": "2303.11381" }, { "id": "2301.12597" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2212.04356" }, { "id": "2303.17580" }, { "id": "1908.07490" }, { "id": "2304.09842" }, { "id": "2208.12037" }, { "id": "2209.10918" }, { "id": "2305.06500" }, { "id": "2211.11559" } ]
2306.08651
2
images of 70 real-world surfaces that need to be cleaned. We additionally illustrate our approach with a robot on 2 carefully designed surfaces. We find an average 12.9% improvement on the MESSYSURFACES benchmark and an average 15% improvement on the robot experiments over baselines that do not use active perception. The dataset, code, and videos of our approach can be found at https://minaek.github.io/groundedsocialreasoning/.
2306.08651#2
Toward Grounded Social Reasoning
Consider a robot tasked with tidying a desk with a meticulously constructed Lego sports car. A human may recognize that it is not socially appropriate to disassemble the sports car and put it away as part of the "tidying". How can a robot reach that conclusion? Although large language models (LLMs) have recently been used to enable social reasoning, grounding this reasoning in the real world has been challenging. To reason in the real world, robots must go beyond passively querying LLMs and *actively gather information from the environment* that is required to make the right decision. For instance, after detecting that there is an occluded car, the robot may need to actively perceive the car to know whether it is an advanced model car made out of Legos or a toy car built by a toddler. We propose an approach that leverages an LLM and vision language model (VLM) to help a robot actively perceive its environment to perform grounded social reasoning. To evaluate our framework at scale, we release the MessySurfaces dataset which contains images of 70 real-world surfaces that need to be cleaned. We additionally illustrate our approach with a robot on 2 carefully designed surfaces. We find an average 12.9% improvement on the MessySurfaces benchmark and an average 15% improvement on the robot experiments over baselines that do not use active perception. The dataset, code, and videos of our approach can be found at https://minaek.github.io/groundedsocialreasoning.
http://arxiv.org/pdf/2306.08651
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
cs.RO, cs.AI
null
null
cs.RO
20230614
20230614
[ { "id": "1606.06565" }, { "id": "2008.02275" }, { "id": "2303.00001" }, { "id": "2305.06500" }, { "id": "2201.05320" } ]
2306.08302
3
Index Terms—Natural Language Processing, Large Language Models, Generative Pre-Training, Knowledge Graphs, Roadmap, Bidirectional Reasoning. ✦ # 1 INTRODUCTION Large language models (LLMs)1 (e.g., BERT [1], RoBERTa [2], and T5 [3]), pre-trained on the large-scale corpus, have shown great performance in various natural language processing (NLP) tasks, such as question answering [4], machine translation [5], and text generation [6]. Recently, the dramatically increasing model size further enables the LLMs with the emergent ability [7], paving the road for applying LLMs as Artificial General Intelligence (AGI). Advanced LLMs like ChatGPT2 and PaLM23, with billions of parameters, exhibit great potential in many complex practical tasks, such as education [8], code generation [9] and recommendation [10]. [Figure 1: Summary of the pros and cons of LLMs and KGs. LLM pros: general knowledge, language processing, generalizability; LLM cons: implicit knowledge, hallucination, indecisiveness, black-box, lacking domain-specific/new knowledge. KG pros: structural knowledge, accuracy, decisiveness, interpretability, domain-specific knowledge, evolving knowledge; KG cons: incompleteness, lacking language understanding, unseen facts.]
2306.08302#3
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
2306.08568
3
In contrast to most previous Code LLMs that primarily emphasize the pre-training process, there has been limited exploration of fine-grained instruction tuning in the Code domain. The introduction of instruction tuning initially aimed to enhance the generalization capabilities of LMs across different tasks [19–25]. OpenAI’s InstructGPT [10], for instance, involved soliciting human annotators to provide explicit instructions to ensure alignment with users’ intentions. Similarly, recent works such as Alpaca [26] employed the self-instruct [27] method, where ChatGPT generated the instruction data. Vicuna [28] utilized user-shared conversations collected from ShareGPT.com. WizardLM [29] introduced the Evol-Instruct method, which involved evolving existing instruction data to generate more complex and diverse datasets. However, it is worth noting that all these approaches primarily focused on the general domain and lacked specific design considerations for the code domain. ∗ Equal contribution. Work done during the internship at Microsoft. † Corresponding author. Preprint. Under review.
2306.08568#3
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+. Our code, model weights, and data are public at https://github.com/nlpxucan/WizardLM
http://arxiv.org/pdf/2306.08568
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
cs.CL, cs.AI
Large Language model, Code Generation, Code LLMs
null
cs.CL
20230614
20230614
[ { "id": "2305.06161" }, { "id": "2304.12244" }, { "id": "2212.10560" } ]
2306.08640
3
segmented video clips, etc. To address such general cases, we propose a multi-modal AI assistant, AssistGPT, with an interleaved code and language reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate LLMs with various tools. Specifically, the Planner is capable of using natural language to plan which tool in Executor should do next based on the current reasoning progress. Inspector is an efficient memory manager to assist the Planner to feed proper visual information into a specific tool. Finally, since the entire reasoning process is complex and flexible, a Learner is designed to enable the model to autonomously explore and discover the optimal solution. We conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving state-of-the-art results. Moreover, showcases demonstrate the ability of our system to handle questions far more complex than those found in the benchmarks.
2306.08640#3
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
Recent research on Large Language Models (LLMs) has led to remarkable advancements in general NLP AI assistants. Some studies have further explored the use of LLMs for planning and invoking models or APIs to address more general multi-modal user queries. Despite this progress, complex visual-based tasks still remain challenging due to the diverse nature of visual tasks. This diversity is reflected in two aspects: 1) Reasoning paths. For many real-life applications, it is hard to accurately decompose a query simply by examining the query itself. Planning based on the specific visual content and the results of each step is usually required. 2) Flexible inputs and intermediate results. Input forms could be flexible for in-the-wild cases, and involves not only a single image or video but a mixture of videos and images, e.g., a user-view image with some reference videos. Besides, a complex reasoning process will also generate diverse multimodal intermediate results, e.g., video narrations, segmented video clips, etc. To address such general cases, we propose a multi-modal AI assistant, AssistGPT, with an interleaved code and language reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate LLMs with various tools. Specifically, the Planner is capable of using natural language to plan which tool in Executor should do next based on the current reasoning progress. Inspector is an efficient memory manager to assist the Planner to feed proper visual information into a specific tool. Finally, since the entire reasoning process is complex and flexible, a Learner is designed to enable the model to autonomously explore and discover the optimal solution. We conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving state-of-the-art results. Moreover, showcases demonstrate the ability of our system to handle questions far more complex than those found in the benchmarks.
http://arxiv.org/pdf/2306.08640
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
cs.CV
Project page: https://showlab.github.io/assistgpt/
null
cs.CV
20230614
20230628
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "1810.04805" }, { "id": "2305.06355" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2201.11903" }, { "id": "2204.00598" }, { "id": "2212.00280" }, { "id": "2112.08614" }, { "id": "2304.04370" }, { "id": "2107.03374" }, { "id": "2303.16434" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2205.10747" }, { "id": "2303.05499" }, { "id": "2211.09699" }, { "id": "2205.14100" }, { "id": "2212.09522" }, { "id": "2303.11381" }, { "id": "2301.12597" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2212.04356" }, { "id": "2303.17580" }, { "id": "1908.07490" }, { "id": "2304.09842" }, { "id": "2208.12037" }, { "id": "2209.10918" }, { "id": "2305.06500" }, { "id": "2211.11559" } ]
2306.08651
3
Keywords: Social Reasoning, Human-Robot Interaction # 1 Introduction Imagine you are asked to clean up a desk and you see a meticulously constructed Lego sports car on it. You might immediately recognize that the socially appropriate behavior is to leave the car be, rather than taking it apart and putting it away as part of the “cleaning”. But how would a robot in that same position know that’s the right thing to do? Traditionally, we would expect this information to be specified in the robot’s objective – either learned from demonstrations [1, 2, 3] or from human feedback [4, 5, 6, 7, 8]. However, Lego sports cars are not common, and it is challenging for humans to specify a priori what objects a robot might encounter [9, 10]. While a robot could expensively query a human for what to do during these circumstances, we explore a different question in this work: how can we enrich robots with the social commonsense reasoning necessary to know what to do, without any human intervention?
2306.08651#3
Toward Grounded Social Reasoning
Consider a robot tasked with tidying a desk with a meticulously constructed Lego sports car. A human may recognize that it is not socially appropriate to disassemble the sports car and put it away as part of the "tidying". How can a robot reach that conclusion? Although large language models (LLMs) have recently been used to enable social reasoning, grounding this reasoning in the real world has been challenging. To reason in the real world, robots must go beyond passively querying LLMs and *actively gather information from the environment* that is required to make the right decision. For instance, after detecting that there is an occluded car, the robot may need to actively perceive the car to know whether it is an advanced model car made out of Legos or a toy car built by a toddler. We propose an approach that leverages an LLM and vision language model (VLM) to help a robot actively perceive its environment to perform grounded social reasoning. To evaluate our framework at scale, we release the MessySurfaces dataset which contains images of 70 real-world surfaces that need to be cleaned. We additionally illustrate our approach with a robot on 2 carefully designed surfaces. We find an average 12.9% improvement on the MessySurfaces benchmark and an average 15% improvement on the robot experiments over baselines that do not use active perception. The dataset, code, and videos of our approach can be found at https://minaek.github.io/groundedsocialreasoning.
http://arxiv.org/pdf/2306.08651
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
cs.RO, cs.AI
null
null
cs.RO
20230614
20230614
[ { "id": "1606.06565" }, { "id": "2008.02275" }, { "id": "2303.00001" }, { "id": "2305.06500" }, { "id": "2201.05320" } ]
2306.08302
4
• Shirui Pan is with the School of Information and Communication Technology and Institute for Integrated and Intelligent Systems (IIIS), Griffith University, Queensland, Australia. Email: [email protected]. • Linhao Luo and Yufei Wang are with the Department of Data Science and AI, Monash University, Melbourne, Australia. E-mail: [email protected], [email protected]. • Chen Chen is with the Nanyang Technological University, Singapore. Email: [email protected]. • Jiapu Wang is with the Faculty of Information Technology, Beijing University of Technology, Beijing, China. E-mail: [email protected]. • Xindong Wu is with the Key Laboratory of Knowledge Engineering with Big Data (the Ministry of Education of China), Hefei University of Technology, Hefei, China, and also with the Research Center for Knowledge Engineering, Zhejiang Lab, Hangzhou, China. Email: [email protected]. Shirui Pan and Linhao Luo contributed equally to this work.
2306.08302#4
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
2306.08568
4
∗ Equal contribution. Work done during the internship at Microsoft. † Corresponding author. Preprint. Under review. Motivated by the Evol-Instruct method, this study aims to enhance the capabilities of the SOTA open-source Code LLM, StarCoder [11], by generating intricate code instruction data through code-specific Evol-Instruct. To achieve this, we have made several adaptations to the evolutionary prompt process tailored specifically for code-related tasks. These modifications include refining the evolutionary instructions, simplifying the form of evolutionary prompts, and incorporating code debugging and time-space complexity constraints. Initially, our method is applied to evolve the basic code instruction data, Code Alpaca [30]. Subsequently, we conduct fine-tuning of StarCoder using our newly created code instruction-following training set and obtain our WizardCoder.
2306.08568#4
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+. Our code, model weights, and data are public at https://github.com/nlpxucan/WizardLM
http://arxiv.org/pdf/2306.08568
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
cs.CL, cs.AI
Large Language model, Code Generation, Code LLMs
null
cs.CL
20230614
20230614
[ { "id": "2305.06161" }, { "id": "2304.12244" }, { "id": "2212.10560" } ]
2306.08640
4
# 1 Introduction Large language models (LLMs) [1–4], especially ChatGPT [5], have made remarkable progress in recent months, significantly advancing the field of developing AI assistants. Despite these advances, a single LLM serving as an AI assistant still exhibits inherent limitations in certain abilities, such as understanding visual environments and comprehending complex tasks, which restrict their utility in real-world applications. To address these shortcomings, a promising solution is to explore the integration and collaboration of multiple domain experts e.g., pretrained models or APIs, to tackle complex tasks. Numerous efforts have been made in this direction. Some works [6–8] utilize language as a bridge and transform the visual input into pure texts using foundational visual models, such as captioner [9–11], object detectors [12–14], and OCR models [15, 16]. Subsequently, the extracted texts are fed into LLMs for reasoning tasks like question-answering. Nonetheless, as for complex ∗Corresponding author.
2306.08640#4
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
Recent research on Large Language Models (LLMs) has led to remarkable advancements in general NLP AI assistants. Some studies have further explored the use of LLMs for planning and invoking models or APIs to address more general multi-modal user queries. Despite this progress, complex visual-based tasks still remain challenging due to the diverse nature of visual tasks. This diversity is reflected in two aspects: 1) Reasoning paths. For many real-life applications, it is hard to accurately decompose a query simply by examining the query itself. Planning based on the specific visual content and the results of each step is usually required. 2) Flexible inputs and intermediate results. Input forms could be flexible for in-the-wild cases, and involves not only a single image or video but a mixture of videos and images, e.g., a user-view image with some reference videos. Besides, a complex reasoning process will also generate diverse multimodal intermediate results, e.g., video narrations, segmented video clips, etc. To address such general cases, we propose a multi-modal AI assistant, AssistGPT, with an interleaved code and language reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate LLMs with various tools. Specifically, the Planner is capable of using natural language to plan which tool in Executor should do next based on the current reasoning progress. Inspector is an efficient memory manager to assist the Planner to feed proper visual information into a specific tool. Finally, since the entire reasoning process is complex and flexible, a Learner is designed to enable the model to autonomously explore and discover the optimal solution. We conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving state-of-the-art results. Moreover, showcases demonstrate the ability of our system to handle questions far more complex than those found in the benchmarks.
http://arxiv.org/pdf/2306.08640
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
cs.CV
Project page: https://showlab.github.io/assistgpt/
null
cs.CV
20230614
20230628
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "1810.04805" }, { "id": "2305.06355" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2201.11903" }, { "id": "2204.00598" }, { "id": "2212.00280" }, { "id": "2112.08614" }, { "id": "2304.04370" }, { "id": "2107.03374" }, { "id": "2303.16434" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2205.10747" }, { "id": "2303.05499" }, { "id": "2211.09699" }, { "id": "2205.14100" }, { "id": "2212.09522" }, { "id": "2303.11381" }, { "id": "2301.12597" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2212.04356" }, { "id": "2303.17580" }, { "id": "1908.07490" }, { "id": "2304.09842" }, { "id": "2208.12037" }, { "id": "2209.10918" }, { "id": "2305.06500" }, { "id": "2211.11559" } ]
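The AssistGPT introduction chunk above describes a "language as a bridge" strategy: captioners, object detectors, and OCR models convert the visual input to text, which is then passed to an LLM for question answering. Below is a minimal sketch of that pipeline under stated assumptions: all four model calls are stand-in stubs with hypothetical names, not any specific captioner, detector, OCR system, or LLM API.

```python
# "Language as a bridge": visual experts turn an image into text, and an LLM
# answers over that text. Every call below is a stub so the sketch runs.

def caption(image) -> str:
    return "a desk with a laptop and a coffee mug"            # stub captioner

def detect_objects(image) -> list:
    return ["laptop", "coffee mug", "sticky note"]             # stub object detector

def read_text(image) -> str:
    return "MEETING 3PM"                                       # stub OCR model

def answer_with_llm(prompt: str) -> str:
    return "The note on the desk says the meeting is at 3PM."  # stub LLM

def visual_question_answering(image, question: str) -> str:
    # 1) Convert the visual input into pure text with expert models.
    context = (
        f"Caption: {caption(image)}\n"
        f"Objects: {', '.join(detect_objects(image))}\n"
        f"Text in image: {read_text(image)}\n"
    )
    # 2) Feed the extracted text plus the question to the LLM for reasoning.
    return answer_with_llm(f"{context}\nQuestion: {question}\nAnswer:")

if __name__ == "__main__":
    print(visual_question_answering(image=None, question="When is the meeting?"))
```

As the chunk notes, this fixed conversion step can emit far more text than the query needs while omitting query-relevant details, which is the limitation AssistGPT's planner-driven tool calls aim to avoid.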
2306.08651
4
Recent work has demonstrated that large language models (LLMs) trained on internet data have enough context for commonsense reasoning [11], making moral judgements [12, 13], or acting as a proxy reward function capturing human preferences [14]. Rather than explicitly asking a human for the answer, the robot could instead ask an LLM whether it would be appropriate to clean up the car. But in real-world environments, this is easier said than done. Tapping into an LLM’s social reasoning skills in the real-world requires the ability to ground language in the robot’s perception of the world – an ability that might be afforded by powerful vision-and-language models (VLMs). Unfortunately, we find that today’s VLMs cannot reliably provide all the relevant information for social reasoning. For instance, a VLM may not describe that the sports car is constructed from Legos, or that it contains over 1000 pieces – details that are key to making decisions. While advanced multi-modal models might alleviate this problem, a fundamental limitation is the image itself might not contain all the relevant information. If the sports car is partially occluded by a bag (as in Fig. 1), no VLM could provide the necessary context for reasoning over what actions to take. Such a
2306.08651#4
Toward Grounded Social Reasoning
Consider a robot tasked with tidying a desk with a meticulously constructed Lego sports car. A human may recognize that it is not socially appropriate to disassemble the sports car and put it away as part of the "tidying". How can a robot reach that conclusion? Although large language models (LLMs) have recently been used to enable social reasoning, grounding this reasoning in the real world has been challenging. To reason in the real world, robots must go beyond passively querying LLMs and *actively gather information from the environment* that is required to make the right decision. For instance, after detecting that there is an occluded car, the robot may need to actively perceive the car to know whether it is an advanced model car made out of Legos or a toy car built by a toddler. We propose an approach that leverages an LLM and vision language model (VLM) to help a robot actively perceive its environment to perform grounded social reasoning. To evaluate our framework at scale, we release the MessySurfaces dataset which contains images of 70 real-world surfaces that need to be cleaned. We additionally illustrate our approach with a robot on 2 carefully designed surfaces. We find an average 12.9% improvement on the MessySurfaces benchmark and an average 15% improvement on the robot experiments over baselines that do not use active perception. The dataset, code, and videos of our approach can be found at https://minaek.github.io/groundedsocialreasoning.
http://arxiv.org/pdf/2306.08651
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
cs.RO, cs.AI
null
null
cs.RO
20230614
20230614
[ { "id": "1606.06565" }, { "id": "2008.02275" }, { "id": "2303.00001" }, { "id": "2305.06500" }, { "id": "2201.05320" } ]
2306.08302
5
1. LLMs are also known as pre-trained language models (PLMs). 2. https://openai.com/blog/chatgpt 3. https://ai.google/discover/palm2 Fig. 1. Summarization of the pros and cons for LLMs and KGs. LLM pros: General Knowledge [11], Language Processing [12], Generalizability [13]; LLM cons: Implicit Knowledge [14], Hallucination [15], Indecisiveness [16], Black-box [17], Lacking Domain-specific/New Knowledge [18]. KG pros: Structural Knowledge [19], Accuracy [20], Decisiveness [21], Interpretability [22], Domain-specific Knowledge [23], Evolving Knowledge [24]; KG cons: Incompleteness [25], Lacking Language Understanding [26], Unseen Facts [27]. Pros and cons are selected based on their representativeness. Detailed discussion can be found in Appendix A. Despite their success in many applications, LLMs have been criticized for their lack of factual knowledge. Specifically, LLMs memorize facts and knowledge contained in the training corpus [14]. However, further studies reveal that LLMs are not able to recall facts and often experience hallucinations by generating statements that are factually
2306.08302#5
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
2306.08568
5
The experimental results obtained from four code generation benchmarks, namely HumanEval [31], HumanEval+ [32], MBPP [33], and DS-1000 [34], demonstrate that our WizardCoder outperforms all other open-source Code LLMs, achieving state-of-the-art (SOTA) performance. Specifically, we observe a substantial improvement in pass@1 scores, with an increase of +22.3 (57.3 vs. 35.0) in HumanEval and +8.2 (51.8 vs. 43.6) in MBPP. Remarkably, despite its much smaller size, our WizardCoder even surpasses Anthropic’s Claude and Google’s Bard in terms of pass rates on HumanEval and HumanEval+. The contributions of this work can be summarized as follows: • We introduce WizardCoder, which enhances the performance of the open-source Code LLM, StarCoder, through the application of Code Evol-Instruct. • WizardCoder surpasses all other open-source Code LLMs by a substantial margin in terms of code generation, including StarCoder, CodeGen, CodeGeeX, CodeT5+, InstructCodeT5+, StarCoder-GPTeacher, and Instruct-Codegen-16B.
2306.08568#5
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+. Our code, model weights, and data are public at https://github.com/nlpxucan/WizardLM
http://arxiv.org/pdf/2306.08568
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
cs.CL, cs.AI
Large Language model, Code Generation, Code LLMs
null
cs.CL
20230614
20230614
[ { "id": "2305.06161" }, { "id": "2304.12244" }, { "id": "2212.10560" } ]
2306.08640
5
[Figure content: an in-the-wild AssistGPT example. A user asks how to add Midjourney to their own server after clicking the "show member icon", providing a screenshot and a 627-second "Midjourney Beginner Tutorial" video. The Planner (LLM) alternates thoughts and tool calls from the Executor (object detection, video narration, subtitle grounding, OFA region grounding, OCR detection, ASR): it first locates the relevant tutorial clip (3:58-4:15), then infers the step after selecting the "show member" icon and grounds the "Midjourney Bot" region in the screenshot, while the Inspector summarizes visual inputs and intermediate results. Final answer: right-click on the "Midjourney Bot".]
2306.08640#5
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
Recent research on Large Language Models (LLMs) has led to remarkable advancements in general NLP AI assistants. Some studies have further explored the use of LLMs for planning and invoking models or APIs to address more general multi-modal user queries. Despite this progress, complex visual-based tasks still remain challenging due to the diverse nature of visual tasks. This diversity is reflected in two aspects: 1) Reasoning paths. For many real-life applications, it is hard to accurately decompose a query simply by examining the query itself. Planning based on the specific visual content and the results of each step is usually required. 2) Flexible inputs and intermediate results. Input forms could be flexible for in-the-wild cases, and involves not only a single image or video but a mixture of videos and images, e.g., a user-view image with some reference videos. Besides, a complex reasoning process will also generate diverse multimodal intermediate results, e.g., video narrations, segmented video clips, etc. To address such general cases, we propose a multi-modal AI assistant, AssistGPT, with an interleaved code and language reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate LLMs with various tools. Specifically, the Planner is capable of using natural language to plan which tool in Executor should do next based on the current reasoning progress. Inspector is an efficient memory manager to assist the Planner to feed proper visual information into a specific tool. Finally, since the entire reasoning process is complex and flexible, a Learner is designed to enable the model to autonomously explore and discover the optimal solution. We conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving state-of-the-art results. Moreover, showcases demonstrate the ability of our system to handle questions far more complex than those found in the benchmarks.
http://arxiv.org/pdf/2306.08640
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
cs.CV
Project page: https://showlab.github.io/assistgpt/
null
cs.CV
20230614
20230628
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "1810.04805" }, { "id": "2305.06355" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2201.11903" }, { "id": "2204.00598" }, { "id": "2212.00280" }, { "id": "2112.08614" }, { "id": "2304.04370" }, { "id": "2107.03374" }, { "id": "2303.16434" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2205.10747" }, { "id": "2303.05499" }, { "id": "2211.09699" }, { "id": "2205.14100" }, { "id": "2212.09522" }, { "id": "2303.11381" }, { "id": "2301.12597" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2212.04356" }, { "id": "2303.17580" }, { "id": "1908.07490" }, { "id": "2304.09842" }, { "id": "2208.12037" }, { "id": "2209.10918" }, { "id": "2305.06500" }, { "id": "2211.11559" } ]
2306.08651
5
If the sports car is partially occluded by a bag (as in Fig. 1), no VLM could provide the necessary context for reasoning over what actions to take. Such a system would instead need the ability to move the bag – or move itself – to actively gather the necessary information. Thus, in order to perform “grounded social reasoning” robots must go beyond passively querying LLMs and VLMs to obtain action plans and instead directly interact with
2306.08651#5
Toward Grounded Social Reasoning
Consider a robot tasked with tidying a desk with a meticulously constructed Lego sports car. A human may recognize that it is not socially appropriate to disassemble the sports car and put it away as part of the "tidying". How can a robot reach that conclusion? Although large language models (LLMs) have recently been used to enable social reasoning, grounding this reasoning in the real world has been challenging. To reason in the real world, robots must go beyond passively querying LLMs and *actively gather information from the environment* that is required to make the right decision. For instance, after detecting that there is an occluded car, the robot may need to actively perceive the car to know whether it is an advanced model car made out of Legos or a toy car built by a toddler. We propose an approach that leverages an LLM and vision language model (VLM) to help a robot actively perceive its environment to perform grounded social reasoning. To evaluate our framework at scale, we release the MessySurfaces dataset which contains images of 70 real-world surfaces that need to be cleaned. We additionally illustrate our approach with a robot on 2 carefully designed surfaces. We find an average 12.9% improvement on the MessySurfaces benchmark and an average 15% improvement on the robot experiments over baselines that do not use active perception. The dataset, code, and videos of our approach can be found at https://minaek.github.io/groundedsocialreasoning.
http://arxiv.org/pdf/2306.08651
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
cs.RO, cs.AI
null
null
cs.RO
20230614
20230614
[ { "id": "1606.06565" }, { "id": "2008.02275" }, { "id": "2303.00001" }, { "id": "2305.06500" }, { "id": "2201.05320" } ]
2306.08568
6
• WizardCoder achieves superior results in code generation compared to the largest closed-source LLMs, such as Claude, Bard, PaLM, PaLM-2, and LaMDA, despite being considerably smaller in size. # 2 Related Work Large Language Models. Recently, LLMs have demonstrated remarkable achievements across a broad spectrum of tasks. Prominent tech companies have made significant strides in developing highly proficient LLMs. These include OpenAI’s GPT-3 and GPT-4 [1, 2], Google’s PaLM [3, 4] and Bard, DeepMind’s Chinchilla [5] and Gopher [6], as well as Anthropic’s Claude. However, it is important to note that these models are closed-source and can only be accessed through specific APIs or may not be accessible at all.
2306.08568#6
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+. Our code, model weights, and data are public at https://github.com/nlpxucan/WizardLM
http://arxiv.org/pdf/2306.08568
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
cs.CL, cs.AI
Large Language model, Code Generation, Code LLMs
null
cs.CL
20230614
20230614
[ { "id": "2305.06161" }, { "id": "2304.12244" }, { "id": "2212.10560" } ]
2306.08651
6
[Figure content: the grounded social reasoning framework illustrated with the sports car example for the instruction "Clean the surface". 1) Add context C^i (e.g., "there is a sports car and ..."); 2) Ask follow-up questions Q^i (e.g., "Is the sports car made of Legos?") and choose the best angle to answer it (top/front/back/left); 3) Actively perceive the scene: the robot takes a close-up photo from the chosen front angle; 4) Ask the VLM, which answers "The sports car is made of Legos"; 5) Pick an action plan from options such as (a) leave as is, (b) place the fallen model back onto its stand, (c) arrange the remote-controlled car, controller, and cable neatly, (d) leave the collectible car in its packaging, (e) reattach loose parts.]
2306.08651#6
Toward Grounded Social Reasoning
Consider a robot tasked with tidying a desk with a meticulously constructed Lego sports car. A human may recognize that it is not socially appropriate to disassemble the sports car and put it away as part of the "tidying". How can a robot reach that conclusion? Although large language models (LLMs) have recently been used to enable social reasoning, grounding this reasoning in the real world has been challenging. To reason in the real world, robots must go beyond passively querying LLMs and *actively gather information from the environment* that is required to make the right decision. For instance, after detecting that there is an occluded car, the robot may need to actively perceive the car to know whether it is an advanced model car made out of Legos or a toy car built by a toddler. We propose an approach that leverages an LLM and vision language model (VLM) to help a robot actively perceive its environment to perform grounded social reasoning. To evaluate our framework at scale, we release the MessySurfaces dataset which contains images of 70 real-world surfaces that need to be cleaned. We additionally illustrate our approach with a robot on 2 carefully designed surfaces. We find an average 12.9% improvement on the MessySurfaces benchmark and an average 15% improvement on the robot experiments over baselines that do not use active perception. The dataset, code, and videos of our approach can be found at https://minaek.github.io/groundedsocialreasoning.
http://arxiv.org/pdf/2306.08651
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
cs.RO, cs.AI
null
null
cs.RO
20230614
20230614
[ { "id": "1606.06565" }, { "id": "2008.02275" }, { "id": "2303.00001" }, { "id": "2305.06500" }, { "id": "2201.05320" } ]
2306.08302
7
As black-box models, LLMs are also criticized for their lack of interpretability. LLMs represent knowledge implicitly in their parameters. It is difficult to interpret or validate the knowledge obtained by LLMs. Moreover, LLMs perform reasoning by a probability model, which is an indecisive process [16]. The specific patterns and functions LLMs used to arrive at predictions or decisions are not directly accessible or explainable to humans [17]. Even though some LLMs are equipped to explain their predictions by applying chain-of-thought [29], their reasoning explanations also suffer from the hallucination issue [30]. This severely impairs the application of LLMs in high-stakes scenarios, such as medical diagnosis and legal judgment. For instance, in a medical diagnosis scenario, LLMs may incorrectly diagnose a disease and provide explanations that contradict medical commonsense. This raises another issue that LLMs trained on general corpus might not be able to generalize well to specific domains or new knowledge due to the lack of domain-specific knowledge or new training data [18].
2306.08302#7
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
2306.08568
7
The AI community has witnessed the release of several open-source LLMs, where the model weights are made publicly available. EleutherAI has contributed GPT-NeoX-20B [35] and GPT-J-6B [36]. Google has released UL2-20B [37]. Tsinghua University has introduced GLM-130B [7]. Meta has released OPT [9] and LLaMA [8]. It is worth noting that while these open-source models have made valuable contributions, they generally do not exhibit the same level of performance as their closed-source counterparts. Large Language Models for Code. Recent studies have introduced a significant number of LLMs for code-related tasks to address the challenges of code understanding and generation. OpenAI has unveiled Codex [16] and Code-Davinci [38]. Google has proposed PaLM-Coder [3]. They perform outstandingly on the popular code completion benchmarks, like HumanEval [31] and MBPP [33]. However, these models are closed-source.
2306.08568#7
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+. Our code, model weights, and data are public at https://github.com/nlpxucan/WizardLM
http://arxiv.org/pdf/2306.08568
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
cs.CL, cs.AI
Large Language model, Code Generation, Code LLMs
null
cs.CL
20230614
20230614
[ { "id": "2305.06161" }, { "id": "2304.12244" }, { "id": "2212.10560" } ]
2306.08640
7
Figure 1: In-the-wild example of AssistGPT. AssistGPT can reason in an interleaved language and code format. Given a query input and visual inputs, AssistGPT plans the problem-solving path in language, using structured code to call upon various powerful tools. The Inspector, part of the system, can manage visual inputs and intermediate results, assisting the Planner to invoke tools. Meanwhile, the Learner can assess the reasoning process and collect in-context examples. visual scenarios such as a long-form video with complicated scene switching, as shown in Fig. 1, the generated texts may go well beyond the query requirements. This can lead to an abundance of superfluous information while crucial details relevant to the query may be omitted.
2306.08640#7
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
Recent research on Large Language Models (LLMs) has led to remarkable advancements in general NLP AI assistants. Some studies have further explored the use of LLMs for planning and invoking models or APIs to address more general multi-modal user queries. Despite this progress, complex visual-based tasks still remain challenging due to the diverse nature of visual tasks. This diversity is reflected in two aspects: 1) Reasoning paths. For many real-life applications, it is hard to accurately decompose a query simply by examining the query itself. Planning based on the specific visual content and the results of each step is usually required. 2) Flexible inputs and intermediate results. Input forms could be flexible for in-the-wild cases, and involves not only a single image or video but a mixture of videos and images, e.g., a user-view image with some reference videos. Besides, a complex reasoning process will also generate diverse multimodal intermediate results, e.g., video narrations, segmented video clips, etc. To address such general cases, we propose a multi-modal AI assistant, AssistGPT, with an interleaved code and language reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate LLMs with various tools. Specifically, the Planner is capable of using natural language to plan which tool in Executor should do next based on the current reasoning progress. Inspector is an efficient memory manager to assist the Planner to feed proper visual information into a specific tool. Finally, since the entire reasoning process is complex and flexible, a Learner is designed to enable the model to autonomously explore and discover the optimal solution. We conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving state-of-the-art results. Moreover, showcases demonstrate the ability of our system to handle questions far more complex than those found in the benchmarks.
http://arxiv.org/pdf/2306.08640
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
cs.CV
Project page: https://showlab.github.io/assistgpt/
null
cs.CV
20230614
20230628
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "1810.04805" }, { "id": "2305.06355" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2201.11903" }, { "id": "2204.00598" }, { "id": "2212.00280" }, { "id": "2112.08614" }, { "id": "2304.04370" }, { "id": "2107.03374" }, { "id": "2303.16434" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2205.10747" }, { "id": "2303.05499" }, { "id": "2211.09699" }, { "id": "2205.14100" }, { "id": "2212.09522" }, { "id": "2303.11381" }, { "id": "2301.12597" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2212.04356" }, { "id": "2303.17580" }, { "id": "1908.07490" }, { "id": "2304.09842" }, { "id": "2208.12037" }, { "id": "2209.10918" }, { "id": "2305.06500" }, { "id": "2211.11559" } ]
2306.08651
7
Figure 1: Grounded Social Reasoning Framework. We demonstrate our framework using the sports car. Blue boxes indicate the model and yellow boxes indicate its output. Our framework takes an image of the scene and an instruction as input. 1) The VLM outputs an initial description of the scene C_0 from the initial image im_0. 2) The LLM asks follow-up questions about each object in the scene, Q^i. 3) The robot takes a close-up image im^i_k of each object k. It is guided by an LLM that chooses the best angle that would help answer the question. 4) We pair the close-up images with the follow-up questions and ask the VLM to answer them. Answers are appended to the context. We repeat steps 1-4 to gather more information. 5) Finally, we query an LLM to choose the most socially appropriate way to tidy the object. the environment. Our insight is that robots must reason about what additional information they need to make socially appropriate decisions, and then actively perceive the environment to gather that information.
2306.08651#7
Toward Grounded Social Reasoning
Consider a robot tasked with tidying a desk with a meticulously constructed Lego sports car. A human may recognize that it is not socially appropriate to disassemble the sports car and put it away as part of the "tidying". How can a robot reach that conclusion? Although large language models (LLMs) have recently been used to enable social reasoning, grounding this reasoning in the real world has been challenging. To reason in the real world, robots must go beyond passively querying LLMs and *actively gather information from the environment* that is required to make the right decision. For instance, after detecting that there is an occluded car, the robot may need to actively perceive the car to know whether it is an advanced model car made out of Legos or a toy car built by a toddler. We propose an approach that leverages an LLM and vision language model (VLM) to help a robot actively perceive its environment to perform grounded social reasoning. To evaluate our framework at scale, we release the MessySurfaces dataset which contains images of 70 real-world surfaces that need to be cleaned. We additionally illustrate our approach with a robot on 2 carefully designed surfaces. We find an average 12.9% improvement on the MessySurfaces benchmark and an average 15% improvement on the robot experiments over baselines that do not use active perception. The dataset, code, and videos of our approach can be found at https://minaek.github.io/groundedsocialreasoning.
http://arxiv.org/pdf/2306.08651
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
cs.RO, cs.AI
null
null
cs.RO
20230614
20230614
[ { "id": "1606.06565" }, { "id": "2008.02275" }, { "id": "2303.00001" }, { "id": "2305.06500" }, { "id": "2201.05320" } ]
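The Figure 1 caption above spells out an iterative loop: describe the scene, ask follow-up questions per object, take a close-up from an LLM-chosen angle, have the VLM answer, and finally pick an action. The sketch below mirrors those steps under stated assumptions: the prompts and the `llm`/`vlm`/`take_close_up` callables are hypothetical stand-ins, not the authors' actual models or prompt templates.

```python
# Iterative active-perception loop: gather object-specific details via
# follow-up questions and close-ups, then decide how to tidy each object.
def grounded_cleanup_plan(scene_image, objects, llm, vlm, take_close_up, rounds=2):
    # Step 1: initial scene description from the VLM.
    context = vlm(scene_image, "Describe the objects on this surface.")
    for _ in range(rounds):
        for obj in objects:
            # Step 2: LLM asks what it still needs to know about this object.
            question = llm(f"Context: {context}\n"
                           f"What do you still need to know about the {obj} "
                           f"before tidying it? Ask one question.")
            # Step 3: LLM picks a camera angle; robot takes a close-up photo.
            angle = llm(f"Best camera angle (top/front/back/left/right) to answer: {question}")
            close_up = take_close_up(obj, angle)
            # Step 4: VLM answers the follow-up question; append to context.
            answer = vlm(close_up, question)
            context += f"\n{obj}: {question} -> {answer}"
    # Step 5: choose the most socially appropriate action per object.
    return {obj: llm(f"Context: {context}\n"
                     f"Choose the most socially appropriate way to tidy the {obj}.")
            for obj in objects}

if __name__ == "__main__":
    # Trivial stubs so the sketch executes end to end.
    llm = lambda prompt: "leave it as is" if "appropriate" in prompt else "front"
    vlm = lambda image, question: "a Lego sports car with over 1000 pieces"
    take_close_up = lambda obj, angle: f"<close-up of {obj} from {angle}>"
    print(grounded_cleanup_plan("<scene>", ["sports car"], llm, vlm, take_close_up))
```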
2306.08302
8
To address the above issues, a potential solution is to incorporate knowledge graphs (KGs) into LLMs. Knowledge graphs (KGs), storing enormous facts in the way of triples, i.e., (head entity, relation, tail entity), are a structured and decisive manner of knowledge representation (e.g., Wikidata [20], YAGO [31], and NELL [32]). KGs are crucial for various applications as they offer accurate explicit knowledge [19]. Besides, they are renowned for their symbolic reasoning ability [22], which generates interpretable results. KGs can also actively evolve with new knowledge continuously added in [24]. Additionally, experts can construct domain-specific KGs to provide precise and dependable domain-specific knowledge [23]. Nevertheless, KGs are difficult to construct [25], and current approaches in KGs [27], [33], [34] are inadequate in handling the incomplete and dynamically changing nature of real-world KGs. These approaches fail to effectively model unseen entities and represent new facts. In addition, they often ignore the abundant textual information in KGs. Moreover, existing methods in KGs are often customized for specific KGs or tasks, which are not generalizable enough. Therefore, it is also necessary to utilize LLMs to address the challenges faced in KGs. We summarize the pros and cons of LLMs and KGs in Fig. 1, respectively.
2306.08302#8
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
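The chunk above characterizes KGs as explicit (head entity, relation, tail entity) triples that give accurate, decisive lookups. As a toy illustration only (the class, method names, and sample facts below are made up and not tied to any KG system from the paper), a triple store reduces a factual query to an exact set lookup: either the fact is stored or it is not.

```python
from collections import defaultdict

class TripleStore:
    """Minimal (head, relation, tail) store with exact, inspectable lookups."""

    def __init__(self):
        self._by_head_relation = defaultdict(set)

    def add(self, head: str, relation: str, tail: str) -> None:
        self._by_head_relation[(head, relation)].add(tail)

    def query(self, head: str, relation: str) -> set:
        # Decisive answer: the fact is either stored or absent.
        return self._by_head_relation.get((head, relation), set())

if __name__ == "__main__":
    kg = TripleStore()
    kg.add("Albert Einstein", "born_in", "Ulm")
    kg.add("Ulm", "located_in", "Germany")
    print(kg.query("Albert Einstein", "born_in"))   # {'Ulm'}
    print(kg.query("Albert Einstein", "died_in"))   # set(): unseen fact
```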
2306.08568
8
On the other hand, there are several open-source Code LLMs available. Salesforce has introduced CodeGen [13], CodeT5 [17], and CodeT5+ [18]. Tsinghua University has contributed CodeGeeX [14], and the BigCode Project has developed StarCoder [11]. These models have demonstrated notable advancements in code-related tasks. However, when compared to the SOTA closed-source models, they still lag behind significantly. In contrast to the aforementioned models without instruction fine-tuning, our work demonstrates that further training Code LLMs with Code Evol-Instruct can substantially enhance performance.
2306.08568#8
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+. Our code, model weights, and data are public at https://github.com/nlpxucan/WizardLM
http://arxiv.org/pdf/2306.08568
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
cs.CL, cs.AI
Large Language model, Code Generation, Code LLMs
null
cs.CL
20230614
20230614
[ { "id": "2305.06161" }, { "id": "2304.12244" }, { "id": "2212.10560" } ]
2306.08640
8
Some other concurrent works propose decomposing user queries into subtasks and plan to sequentially call external models or APIs to answer them. Currently, there are two branches of methods. The first one is language-based planning [17–20]. For instance, HuggingGPT and Chameleon [17, 19] propose using an LLM as a controller, managing and organizing the cooperation of expert models. Another branch of work is code-based planning [21–23]. ViperGPT [21] proposes to use Codex to write the Python code to call visual-related APIs for handling multi-modal tasks. These approaches allow for invoking the models only when necessary, which allows models to only output useful information and optimize the use of computation resources.
2306.08640#8
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
Recent research on Large Language Models (LLMs) has led to remarkable advancements in general NLP AI assistants. Some studies have further explored the use of LLMs for planning and invoking models or APIs to address more general multi-modal user queries. Despite this progress, complex visual-based tasks still remain challenging due to the diverse nature of visual tasks. This diversity is reflected in two aspects: 1) Reasoning paths. For many real-life applications, it is hard to accurately decompose a query simply by examining the query itself. Planning based on the specific visual content and the results of each step is usually required. 2) Flexible inputs and intermediate results. Input forms could be flexible for in-the-wild cases, and involves not only a single image or video but a mixture of videos and images, e.g., a user-view image with some reference videos. Besides, a complex reasoning process will also generate diverse multimodal intermediate results, e.g., video narrations, segmented video clips, etc. To address such general cases, we propose a multi-modal AI assistant, AssistGPT, with an interleaved code and language reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate LLMs with various tools. Specifically, the Planner is capable of using natural language to plan which tool in Executor should do next based on the current reasoning progress. Inspector is an efficient memory manager to assist the Planner to feed proper visual information into a specific tool. Finally, since the entire reasoning process is complex and flexible, a Learner is designed to enable the model to autonomously explore and discover the optimal solution. We conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving state-of-the-art results. Moreover, showcases demonstrate the ability of our system to handle questions far more complex than those found in the benchmarks.
http://arxiv.org/pdf/2306.08640
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
cs.CV
Project page: https://showlab.github.io/assistgpt/
null
cs.CV
20230614
20230628
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "1810.04805" }, { "id": "2305.06355" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2201.11903" }, { "id": "2204.00598" }, { "id": "2212.00280" }, { "id": "2112.08614" }, { "id": "2304.04370" }, { "id": "2107.03374" }, { "id": "2303.16434" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2205.10747" }, { "id": "2303.05499" }, { "id": "2211.09699" }, { "id": "2205.14100" }, { "id": "2212.09522" }, { "id": "2303.11381" }, { "id": "2301.12597" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2212.04356" }, { "id": "2303.17580" }, { "id": "1908.07490" }, { "id": "2304.09842" }, { "id": "2208.12037" }, { "id": "2209.10918" }, { "id": "2305.06500" }, { "id": "2211.11559" } ]
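The chunk above contrasts language-based planning with code-based planning, where a model such as Codex writes Python that calls visual APIs (the ViperGPT approach). The sketch below illustrates only the general shape of that idea: the `find`/`query` functions and the generated program string are invented stubs, not ViperGPT's actual API or generated code.

```python
def find(image, name: str):
    """Stub detector: pretend we located the requested object in the image."""
    return {"name": name, "crop": f"<crop of {name}>"}

def query(region, question: str) -> str:
    """Stub VQA call on an image region."""
    return "it is used for opening bottles"

def run_generated_program(program: str, image) -> str:
    # Execute the LLM-generated code in a namespace that only exposes the
    # visual API and the input image; the program must assign `answer`.
    namespace = {"find": find, "query": query, "image": image}
    exec(program, namespace)
    return namespace["answer"]

if __name__ == "__main__":
    # In a real system this string would come from the code-writing LLM.
    generated = (
        "red_object = find(image, 'red object')\n"
        "answer = query(red_object['crop'], 'What is this object used for?')\n"
    )
    print(run_generated_program(generated, image="<kitchen photo>"))
```

The design choice this captures is the one the chunk highlights: the expert models are invoked only when the generated plan actually needs them, so the system emits only query-relevant information and saves computation.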
2306.08651
8
Acting on this insight, we propose a framework to enable a robot to perform grounded social reasoning by iteratively identifying details it still needs to clarify about the scene before it can make a decision (e.g. is the model car made out of intricate Lego pieces or MEGA Bloks?) and actively gathering new observations to help answer those questions (e.g. getting a close up of the car from a better angle). In this paper, we focus on the task of cleaning up real-world surfaces in a socially appropriate manner. Our framework is shown in Fig. 1. Given a textual description of the desk, an LLM asks follow-up questions about the state of each object that it needs in order to make a decision of what the robot should do with that object. The robot actively perceives the scene by taking close-up photos of each object from angles suggested by the LLM. The follow-up questions and close-up photos are then given to a VLM so that it can provide more information about the scene. This process can be repeated multiple times. The LLM then decides on an action the robot should take to clean the object in a socially appropriate manner. For example, our robot leaves the Lego sports car intact, throws a browning
2306.08651#8
Toward Grounded Social Reasoning
Consider a robot tasked with tidying a desk with a meticulously constructed Lego sports car. A human may recognize that it is not socially appropriate to disassemble the sports car and put it away as part of the "tidying". How can a robot reach that conclusion? Although large language models (LLMs) have recently been used to enable social reasoning, grounding this reasoning in the real world has been challenging. To reason in the real world, robots must go beyond passively querying LLMs and *actively gather information from the environment* that is required to make the right decision. For instance, after detecting that there is an occluded car, the robot may need to actively perceive the car to know whether it is an advanced model car made out of Legos or a toy car built by a toddler. We propose an approach that leverages an LLM and vision language model (VLM) to help a robot actively perceive its environment to perform grounded social reasoning. To evaluate our framework at scale, we release the MessySurfaces dataset which contains images of 70 real-world surfaces that need to be cleaned. We additionally illustrate our approach with a robot on 2 carefully designed surfaces. We find an average 12.9% improvement on the MessySurfaces benchmark and an average 15% improvement on the robot experiments over baselines that do not use active perception. The dataset, code, and videos of our approach can be found at https://minaek.github.io/groundedsocialreasoning.
http://arxiv.org/pdf/2306.08651
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
cs.RO, cs.AI
null
null
cs.RO
20230614
20230614
[ { "id": "1606.06565" }, { "id": "2008.02275" }, { "id": "2303.00001" }, { "id": "2305.06500" }, { "id": "2201.05320" } ]
2306.08302
9
Recently, the possibility of unifying LLMs with KGs has attracted increasing attention from researchers and practitioners. LLMs and KGs are inherently interconnected and can mutually enhance each other. In KG-enhanced LLMs, KGs can not only be incorporated into the pre-training and inference stages of LLMs to provide external knowledge [35]–[37], but also used for analyzing LLMs and providing interpretability [14], [38], [39]. In LLM-augmented KGs, LLMs have been used in various KG-related tasks, e.g., KG embedding [40], KG completion [26], KG construction [41], KG-to-text generation [42], and KGQA [43], to improve the performance and facilitate the application of KGs. In Synergized LLM + KG, researchers marry the merits of LLMs and KGs to mutually enhance performance in knowledge representation [44] and reasoning [45], [46]. Although there are some surveys on knowledge-enhanced LLMs [47]–[49], which mainly focus on using KGs as an external knowledge source to enhance LLMs, they ignore other possibilities of integrating KGs for LLMs and the potential role of LLMs in KG applications.
2306.08302#9
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
2306.08568
9
Instruction Fine-Tuning. The primary objective of instruction fine-tuning in its early stages was to enhance the cross-task generalization capabilities of LMs. This was achieved by fine-tuning LMs with a substantial corpus of public NLP tasks. T5 [19] was among the first models to explore this approach, training on a multitude of supervised text-to-text tasks. Subsequent works such as FLAN [20], ExT5 [22], T0 [23], and UnifiedQA [25] further expanded the range of tasks to bolster the overall generalization ability of LMs. Notably, ZeroPrompt [24] and FLAN-T5 [21] pushed the envelope by incorporating thousands of tasks in their training pipelines. Across these studies, a consistent finding emerges: fine-tuning LMs with diverse NLP task instructions yields significant performance improvements when applied to new tasks.
2306.08568#9
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+. Our code, model weights, and data are public at https://github.com/nlpxucan/WizardLM
http://arxiv.org/pdf/2306.08568
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
cs.CL, cs.AI
Large Language model, Code Generation, Code LLMs
null
cs.CL
20230614
20230614
[ { "id": "2305.06161" }, { "id": "2304.12244" }, { "id": "2212.10560" } ]
2306.08640
9
Despite this progress, addressing high-level queries is still challenging. Specifically, current questions in existing benchmarks usually directly imply how to plan the reasoning. For example, for questions like "What is the red object used for?", no matter what the image is, the reasoning steps are relatively fixed, i.e., recognize the red object, then figure out its function. However, for more complex questions, there could be diverse reasoning paths. For example, for the question "How much black pepper should I use for 700g beef?" in Fig. 2, the variations in the presentation of relevant information, whether it's in the form of subtitles, actions, text within videos, or a combination of these, can result in distinct reasoning paths. Therefore, as shown in Fig. 2, once a reason-only approach makes a mistake, it becomes difficult for it to self-correct. Similar approaches have already been proposed in the NLP field, such as ReAct [24] and ToolFormer [25]. However, there is a unique challenge in multimodal tasks: how to handle non-textual intermediate
2306.08640#9
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
Recent research on Large Language Models (LLMs) has led to remarkable advancements in general NLP AI assistants. Some studies have further explored the use of LLMs for planning and invoking models or APIs to address more general multi-modal user queries. Despite this progress, complex visual-based tasks still remain challenging due to the diverse nature of visual tasks. This diversity is reflected in two aspects: 1) Reasoning paths. For many real-life applications, it is hard to accurately decompose a query simply by examining the query itself. Planning based on the specific visual content and the results of each step is usually required. 2) Flexible inputs and intermediate results. Input forms could be flexible for in-the-wild cases, and involves not only a single image or video but a mixture of videos and images, e.g., a user-view image with some reference videos. Besides, a complex reasoning process will also generate diverse multimodal intermediate results, e.g., video narrations, segmented video clips, etc. To address such general cases, we propose a multi-modal AI assistant, AssistGPT, with an interleaved code and language reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate LLMs with various tools. Specifically, the Planner is capable of using natural language to plan which tool in Executor should do next based on the current reasoning progress. Inspector is an efficient memory manager to assist the Planner to feed proper visual information into a specific tool. Finally, since the entire reasoning process is complex and flexible, a Learner is designed to enable the model to autonomously explore and discover the optimal solution. We conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving state-of-the-art results. Moreover, showcases demonstrate the ability of our system to handle questions far more complex than those found in the benchmarks.
http://arxiv.org/pdf/2306.08640
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
cs.CV
Project page: https://showlab.github.io/assistgpt/
null
cs.CV
20230614
20230628
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "1810.04805" }, { "id": "2305.06355" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2201.11903" }, { "id": "2204.00598" }, { "id": "2212.00280" }, { "id": "2112.08614" }, { "id": "2304.04370" }, { "id": "2107.03374" }, { "id": "2303.16434" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2205.10747" }, { "id": "2303.05499" }, { "id": "2211.09699" }, { "id": "2205.14100" }, { "id": "2212.09522" }, { "id": "2303.11381" }, { "id": "2301.12597" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2212.04356" }, { "id": "2303.17580" }, { "id": "1908.07490" }, { "id": "2304.09842" }, { "id": "2208.12037" }, { "id": "2209.10918" }, { "id": "2305.06500" }, { "id": "2211.11559" } ]
2306.08651
9
action the robot should take to clean the object in a socially appropriate manner. For example, our robot leaves the Lego sports car intact, throws a browning, half-eaten banana in the trash, but keeps an unopened can of Yerba Mate on the desk. Furthermore, we release the MESSYSURFACES dataset containing images of 70 surfaces, as well as an evaluation benchmark that assesses how well a robot can clean up each surface in a socially appropriate manner. The dataset is available here.
2306.08651#9
Toward Grounded Social Reasoning
Consider a robot tasked with tidying a desk with a meticulously constructed Lego sports car. A human may recognize that it is not socially appropriate to disassemble the sports car and put it away as part of the "tidying". How can a robot reach that conclusion? Although large language models (LLMs) have recently been used to enable social reasoning, grounding this reasoning in the real world has been challenging. To reason in the real world, robots must go beyond passively querying LLMs and *actively gather information from the environment* that is required to make the right decision. For instance, after detecting that there is an occluded car, the robot may need to actively perceive the car to know whether it is an advanced model car made out of Legos or a toy car built by a toddler. We propose an approach that leverages an LLM and vision language model (VLM) to help a robot actively perceive its environment to perform grounded social reasoning. To evaluate our framework at scale, we release the MessySurfaces dataset which contains images of 70 real-world surfaces that need to be cleaned. We additionally illustrate our approach with a robot on 2 carefully designed surfaces. We find an average 12.9% improvement on the MessySurfaces benchmark and an average 15% improvement on the robot experiments over baselines that do not use active perception. The dataset, code, and videos of our approach can be found at https://minaek.github.io/groundedsocialreasoning.
http://arxiv.org/pdf/2306.08651
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
cs.RO, cs.AI
null
null
cs.RO
20230614
20230614
[ { "id": "1606.06565" }, { "id": "2008.02275" }, { "id": "2303.00001" }, { "id": "2305.06500" }, { "id": "2201.05320" } ]
2306.08302
10
In this article, we present a forward-looking roadmap for unifying both LLMs and KGs, to leverage their respective strengths and overcome the limitations of each approach, for various downstream tasks. We propose detailed categorization, conduct comprehensive reviews, and pinpoint emerging directions in these fast-growing fields. Our main contributions are summarized as follows: 1) Roadmap. We present a forward-looking roadmap for integrating LLMs and KGs. Our roadmap, consisting of three general frameworks to unify LLMs and KGs, namely, KG-enhanced LLMs, LLM-augmented KGs, and Synergized LLMs + KGs, provides guidelines for the unification of these two distinct but complementary technologies. 2) Categorization and review. For each integration framework of our roadmap, we present a detailed categorization and novel taxonomies of research on unifying LLMs and KGs. In each category, we review the research from the perspectives of different integration strategies and tasks, which provides more insights into each framework. 3) Coverage of emerging advances. We cover the advanced techniques in both LLMs and KGs. We include the discussion of state-of-the-art LLMs like ChatGPT and GPT-4 as well as the novel KGs, e.g., multi-modal knowledge graphs.
2306.08302#10
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
2306.08568
10
While fine-tuning LMs with diverse NLP tasks has shown promising results, it often falls short in aligning with the intentions of real-world users. OpenAI has pursued a different approach by soliciting human annotators to provide a large corpus of human instructions, encompassing diverse forms and a wide range of task types. Building upon this dataset, OpenAI trained its GPT3 [1] model to create InstructGPT [10], which better aligns with users’ inputs. This line of development has even led to the impressive work known as ChatGPT. However, it is important to note that the dataset and model weights associated with these advancements are not publicly available. Alpaca [26] takes a different route by adopting the self-instruct method [27], leveraging ChatGPT to generate data for training. Vicuna [28] utilizes user-shared conversations collected from ShareGPT.com to train its models. WizardLM [29] introduces the Evol-Instruct method, which involves evolving existing instruction data to generate more complex and diverse datasets. In contrast to these general instruction fine-tuning approaches, our WizardCoder successfully applies the Evol-Instruct method specifically in the domain of Code LLMs. # 3 Approach
2306.08568#10
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+. Our code, model weights, and data are public at https://github.com/nlpxucan/WizardLM
http://arxiv.org/pdf/2306.08568
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
cs.CL, cs.AI
Large Language model, Code Generation, Code LLMs
null
cs.CL
20230614
20230614
[ { "id": "2305.06161" }, { "id": "2304.12244" }, { "id": "2212.10560" } ]
2306.08640
10
[Figure 2: Comparison of PEIL and two mainstream reasoning methods in multi-modal tasks. The figure traces the question "How much black pepper should I use for 700g beef?" through three strategies: Reason-only with Language (risk: the subtitles may not provide relevant context), Reason-only with Code (risk: the measurements for beef and pepper may not appear together), and ReAct with PEIL, which grounds the query in the video's subtitles and frames (e.g., a frame reading "Beef 350 g") and arrives at the answer of 1 tsp.]
2306.08640#10
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
Recent research on Large Language Models (LLMs) has led to remarkable advancements in general NLP AI assistants. Some studies have further explored the use of LLMs for planning and invoking models or APIs to address more general multi-modal user queries. Despite this progress, complex visual-based tasks still remain challenging due to the diverse nature of visual tasks. This diversity is reflected in two aspects: 1) Reasoning paths. For many real-life applications, it is hard to accurately decompose a query simply by examining the query itself. Planning based on the specific visual content and the results of each step is usually required. 2) Flexible inputs and intermediate results. Input forms could be flexible for in-the-wild cases, and involves not only a single image or video but a mixture of videos and images, e.g., a user-view image with some reference videos. Besides, a complex reasoning process will also generate diverse multimodal intermediate results, e.g., video narrations, segmented video clips, etc. To address such general cases, we propose a multi-modal AI assistant, AssistGPT, with an interleaved code and language reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate LLMs with various tools. Specifically, the Planner is capable of using natural language to plan which tool in Executor should do next based on the current reasoning progress. Inspector is an efficient memory manager to assist the Planner to feed proper visual information into a specific tool. Finally, since the entire reasoning process is complex and flexible, a Learner is designed to enable the model to autonomously explore and discover the optimal solution. We conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving state-of-the-art results. Moreover, showcases demonstrate the ability of our system to handle questions far more complex than those found in the benchmarks.
http://arxiv.org/pdf/2306.08640
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
cs.CV
Project page: https://showlab.github.io/assistgpt/
null
cs.CV
20230614
20230628
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "1810.04805" }, { "id": "2305.06355" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2201.11903" }, { "id": "2204.00598" }, { "id": "2212.00280" }, { "id": "2112.08614" }, { "id": "2304.04370" }, { "id": "2107.03374" }, { "id": "2303.16434" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2205.10747" }, { "id": "2303.05499" }, { "id": "2211.09699" }, { "id": "2205.14100" }, { "id": "2212.09522" }, { "id": "2303.11381" }, { "id": "2301.12597" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2212.04356" }, { "id": "2303.17580" }, { "id": "1908.07490" }, { "id": "2304.09842" }, { "id": "2208.12037" }, { "id": "2209.10918" }, { "id": "2305.06500" }, { "id": "2211.11559" } ]
2306.08651
10
We evaluate our framework on our benchmark dataset as well as on a real-world robotic system. We examine each component of our framework, asking whether the robot asks useful follow-up questions, whether the robot chooses informative close-up images, and whether the images actually help a VLM more accurately answer questions. We find an average 12.9% improvement on the MESSYSURFACES benchmark and an average 15% improvement on the robot experiments over baselines that do not use active perception. # 2 Related Work Social Reasoning. Large language models are trained on internet-scale data, making them effective commonsense reasoners [15, 16, 17, 18]. Prior works have studied whether LLMs' commonsense enables social reasoning aligned with human values [12, 13, 19, 14]. There is evidence that when LLMs make moral or social judgements, they align with the normative beliefs of the population that generated their training data [20]. In addition, prior work shows that social reasoning models can align with conventional beliefs [21, 22, 23, 24]. Our approach is in line with normative social reasoning; instead of adapting to individual preferences, we show we can take commonsense, socially-appropriate actions to clean up a scene.
2306.08651#10
Toward Grounded Social Reasoning
Consider a robot tasked with tidying a desk with a meticulously constructed Lego sports car. A human may recognize that it is not socially appropriate to disassemble the sports car and put it away as part of the "tidying". How can a robot reach that conclusion? Although large language models (LLMs) have recently been used to enable social reasoning, grounding this reasoning in the real world has been challenging. To reason in the real world, robots must go beyond passively querying LLMs and *actively gather information from the environment* that is required to make the right decision. For instance, after detecting that there is an occluded car, the robot may need to actively perceive the car to know whether it is an advanced model car made out of Legos or a toy car built by a toddler. We propose an approach that leverages an LLM and vision language model (VLM) to help a robot actively perceive its environment to perform grounded social reasoning. To evaluate our framework at scale, we release the MessySurfaces dataset which contains images of 70 real-world surfaces that need to be cleaned. We additionally illustrate our approach with a robot on 2 carefully designed surfaces. We find an average 12.9% improvement on the MessySurfaces benchmark and an average 15% improvement on the robot experiments over baselines that do not use active perception. The dataset, code, and videos of our approach can be found at https://minaek.github.io/groundedsocialreasoning.
http://arxiv.org/pdf/2306.08651
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
cs.RO, cs.AI
null
null
cs.RO
20230614
20230614
[ { "id": "1606.06565" }, { "id": "2008.02275" }, { "id": "2303.00001" }, { "id": "2305.06500" }, { "id": "2201.05320" } ]
2306.08302
11
4) Summary of challenges and future directions. We highlight the challenges in existing research and present several promising future research directions. The rest of this article is organized as follows. Section 2 first explains the background of LLMs and KGs. Section 3 introduces the roadmap and the overall categorization of this article. Section 4 presents the different KG-enhanced LLM approaches. Section 5 describes the possible LLM-augmented KG methods. Section 6 shows the approaches of synergizing LLMs and KGs. Section 7 discusses the challenges and future research directions. Finally, Section 8 concludes this paper. 2 BACKGROUND In this section, we will first briefly introduce a few representative large language models (LLMs) and discuss the prompt engineering that efficiently uses LLMs for a variety of applications. Then, we illustrate the concept of knowledge graphs (KGs) and present different categories of KGs. # 2.1 Large Language models (LLMs) Large language models (LLMs) pre-trained on large-scale corpora have shown great potential in various NLP tasks [13]. As shown in Fig. 3, most LLMs derive from the Transformer design [50], which contains the encoder and decoder
2306.08302#11
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
2306.08568
11
# 3 Approach In this section, we elaborate on the methodological details of WizardCoder. Following WizardLM, we apply the Evol-Instruct method to evolve Code Alpaca (itself generated using self-instruct) and fine-tune the pre-trained Code LLM StarCoder with the evolved data. # 3.1 Evol-Instruct Prompts for Code Inspired by the Evol-Instruct [29] method proposed by WizardLM, this work also attempts to make code instructions more complex to enhance the fine-tuning effectiveness of code pre-trained large models. To adapt Evol-Instruct to the realm of code, we made the following modifications to the evolutionary prompt: 1. Streamlined the evolutionary instructions by removing deepening, complicating input, and In-Breadth Evolving. 2. Simplified the form of evolutionary prompts by unifying the evolutionary prompt template. 3. To address the specific characteristics of the code domain, we added two evolutionary instructions: code debugging and code time-space complexity constraints. The unified code evolutionary prompt template is as follows: Please increase the difficulty of the given programming test question a bit. You can increase the difficulty using, but not limited to, the following methods: {method} {question} Here, {question} represents the current code instruction awaiting evolution, and {method} is the type of evolution. The five types we used are listed as follows:
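To make the unified template concrete, here is a minimal sketch (not the authors' released code) that fills the template with one randomly chosen evolution method; the helper name build_evol_prompt is an assumption, and the method strings paraphrase the five types listed in the next chunk.

```python
import random

# Paraphrase of the five evolution types enumerated in the paper (see the next chunk).
EVOL_METHODS = [
    "Add new constraints and requirements to the original problem, adding approximately 10 additional words.",
    "Replace a commonly used requirement in the programming task with a less common and more specific one.",
    "If the original problem can be solved with only a few logical steps, please add more reasoning steps.",
    "Provide a piece of erroneous code as a reference to increase misdirection.",
    "Propose higher time or space complexity requirements, but please refrain from doing so frequently.",
]

TEMPLATE = (
    "Please increase the difficulty of the given programming test question a bit.\n"
    "You can increase the difficulty using, but not limited to, the following methods:\n"
    "{method}\n\n"
    "{question}"
)

def build_evol_prompt(question: str) -> str:
    """Fill the unified evolutionary prompt with one randomly chosen method."""
    return TEMPLATE.format(method=random.choice(EVOL_METHODS), question=question)

print(build_evol_prompt("Write a Python function that returns the sum of a list of integers."))
```

The evolved prompt would then be sent to a strong instruction-following LLM, whose output becomes a new, harder training instruction.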
2306.08568#11
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+. Our code, model weights, and data are public at https://github.com/nlpxucan/WizardLM
http://arxiv.org/pdf/2306.08568
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
cs.CL, cs.AI
Large Language model, Code Generation, Code LLMs
null
cs.CL
20230614
20230614
[ { "id": "2305.06161" }, { "id": "2304.12244" }, { "id": "2212.10560" } ]
2306.08640
11
Figure 2: Comparison of PEIL and two mainstream reasoning methods in multi-modal tasks. result? For ReAct and ToolFormer, the outputs of external models can be directly fed into the Planner and passed to subsequent models. In contrast, the intermediate results obtained in multimodal tasks are often cropped regions from an image grounding module or segmented video clips from a temporal localization module, as shown in Fig. 1. In complex cases, it is hard for the Planner to manage which information should be fed into the next module.
2306.08640#11
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
Recent research on Large Language Models (LLMs) has led to remarkable advancements in general NLP AI assistants. Some studies have further explored the use of LLMs for planning and invoking models or APIs to address more general multi-modal user queries. Despite this progress, complex visual-based tasks still remain challenging due to the diverse nature of visual tasks. This diversity is reflected in two aspects: 1) Reasoning paths. For many real-life applications, it is hard to accurately decompose a query simply by examining the query itself. Planning based on the specific visual content and the results of each step is usually required. 2) Flexible inputs and intermediate results. Input forms could be flexible for in-the-wild cases, and involves not only a single image or video but a mixture of videos and images, e.g., a user-view image with some reference videos. Besides, a complex reasoning process will also generate diverse multimodal intermediate results, e.g., video narrations, segmented video clips, etc. To address such general cases, we propose a multi-modal AI assistant, AssistGPT, with an interleaved code and language reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate LLMs with various tools. Specifically, the Planner is capable of using natural language to plan which tool in Executor should do next based on the current reasoning progress. Inspector is an efficient memory manager to assist the Planner to feed proper visual information into a specific tool. Finally, since the entire reasoning process is complex and flexible, a Learner is designed to enable the model to autonomously explore and discover the optimal solution. We conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving state-of-the-art results. Moreover, showcases demonstrate the ability of our system to handle questions far more complex than those found in the benchmarks.
http://arxiv.org/pdf/2306.08640
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
cs.CV
Project page: https://showlab.github.io/assistgpt/
null
cs.CV
20230614
20230628
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "1810.04805" }, { "id": "2305.06355" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2201.11903" }, { "id": "2204.00598" }, { "id": "2212.00280" }, { "id": "2112.08614" }, { "id": "2304.04370" }, { "id": "2107.03374" }, { "id": "2303.16434" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2205.10747" }, { "id": "2303.05499" }, { "id": "2211.09699" }, { "id": "2205.14100" }, { "id": "2212.09522" }, { "id": "2303.11381" }, { "id": "2301.12597" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2212.04356" }, { "id": "2303.17580" }, { "id": "1908.07490" }, { "id": "2304.09842" }, { "id": "2208.12037" }, { "id": "2209.10918" }, { "id": "2305.06500" }, { "id": "2211.11559" } ]
2306.08651
11
Learning Human Preferences. Past work on aligning with human preferences has focused on using human feedback to infer rewards and policies by designing queries for active preference learning [25, 4, 6, 26], performing inverse reinforcement learning [27, 28], or recovering reward signals from language feedback [14, 29, 30, 31, 32]. Policies defined via LLMs have also been directly tuned with language feedback by approaches like RLHF [33]. Instead of querying humans, we leverage normative values from pre-trained models. While some works use normative values from LLMs in negotiations and games [34], these are not grounded in the real world. In this work, we do not focus on particular human preferences, though the normative responses of LLMs could be fine-tuned for particular applications. Active Perception. When robots must reason socially like humans, active information gathering may be important [35]. Approaches like TidyBot actively zoom in on objects to better categorize them [36]. Other approaches such as Inner Monologue seek out additional environment information, but need aid from a human annotator or assume access to simulators [37, 38]. VLMs have also been used for active perception in navigation [39, 40, 41]. In this work, we show that active perception is necessary for grounded social reasoning, enabled by the semantic knowledge in an LLM.
2306.08651#11
Toward Grounded Social Reasoning
Consider a robot tasked with tidying a desk with a meticulously constructed Lego sports car. A human may recognize that it is not socially appropriate to disassemble the sports car and put it away as part of the "tidying". How can a robot reach that conclusion? Although large language models (LLMs) have recently been used to enable social reasoning, grounding this reasoning in the real world has been challenging. To reason in the real world, robots must go beyond passively querying LLMs and *actively gather information from the environment* that is required to make the right decision. For instance, after detecting that there is an occluded car, the robot may need to actively perceive the car to know whether it is an advanced model car made out of Legos or a toy car built by a toddler. We propose an approach that leverages an LLM and vision language model (VLM) to help a robot actively perceive its environment to perform grounded social reasoning. To evaluate our framework at scale, we release the MessySurfaces dataset which contains images of 70 real-world surfaces that need to be cleaned. We additionally illustrate our approach with a robot on 2 carefully designed surfaces. We find an average 12.9% improvement on the MessySurfaces benchmark and an average 15% improvement on the robot experiments over baselines that do not use active perception. The dataset, code, and videos of our approach can be found at https://minaek.github.io/groundedsocialreasoning.
http://arxiv.org/pdf/2306.08651
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
cs.RO, cs.AI
null
null
cs.RO
20230614
20230614
[ { "id": "1606.06565" }, { "id": "2008.02275" }, { "id": "2303.00001" }, { "id": "2305.06500" }, { "id": "2201.05320" } ]
2306.08302
12
[Fig. 2 of the paper: a timeline (2018-2023) of representative large language models grouped by architecture, covering encoder-only models (e.g., BERT, DistilBERT, ALBERT, ELECTRA, RoBERTa, ERNIE, DeBERTa), encoder-decoder models (e.g., T5, BART, ST-MoE, UL2, Flan-UL2, Flan-T5), and decoder-only models (e.g., GPT-1/2/3, ChatGPT, GPT-4, GLaM, OPT, OPT-IML, Gopher, LaMDA, PaLM, Flan-PaLM, Bard, LLaMA, Vicuna), annotated with parameter counts and an open-source vs. closed-source distinction.]
2306.08302#12
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
2306.08568
12
{question} Here, {question} represents the current code instruction awaiting evolution, and {method} is the type of evolution. The five types we used are listed as follows: 1. Add new constraints and requirements to the original problem, adding approximately 10 additional words. 2. Replace a commonly used requirement in the programming task with a less common and more specific one. 3. If the original problem can be solved with only a few logical steps, please add more reasoning steps. 4. Provide a piece of erroneous code as a reference to increase misdirection. 5. Propose higher time or space complexity requirements, but please refrain from doing so frequently. # 3.2 Training WizardCoder We employ the following procedure to train WizardCoder. Initially, we utilize StarCoder 15B [11] as the foundation and proceed to fine-tune it using the code instruction-following training set, which was evolved through Evol-Instruct. The prompt format for fine-tuning is outlined as follows: Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. ### Instruction: {instruction} ### Response:
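As a small illustration of the fine-tuning format above, the sketch below renders an (instruction, response) pair into a single supervised training string; the function name and the example pair are assumptions for illustration rather than the released data pipeline.

```python
FINETUNE_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def render_training_example(instruction: str, response: str) -> str:
    """Concatenate the prompt and the target response into one training string."""
    return FINETUNE_TEMPLATE.format(instruction=instruction) + response

print(render_training_example(
    instruction="Write a Python function that sums a list of integers and raises "
                "a ValueError if any element is negative.",
    response="def safe_sum(xs):\n"
             "    if any(x < 0 for x in xs):\n"
             "        raise ValueError('negative element')\n"
             "    return sum(xs)",
))
```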
2306.08568#12
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+. Our code, model weights, and data are public at https://github.com/nlpxucan/WizardLM
http://arxiv.org/pdf/2306.08568
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
cs.CL, cs.AI
Large Language model, Code Generation, Code LLMs
null
cs.CL
20230614
20230614
[ { "id": "2305.06161" }, { "id": "2304.12244" }, { "id": "2212.10560" } ]
2306.08640
12
In this paper, we propose a multi-modal AI Assistant system, named AssistGPT (the design of our model's icon is inspired by HAL 9000 from the movie "2001: A Space Odyssey", a fictional artificial intelligence character), with an interleaved language and code reasoning method, inheriting the advantages of flexible reasoning in ReAct and robust tool invocation in program-based planning. Specifically, our system consists of four parts: Planner, Executor, Inspector, and Learner. We show how our system works in Fig. 1. Similar to ReAct, the Planner thinks about what needs to be done next based on the current reasoning progress and invokes external models. What sets our method apart is the use of formatted code to invoke external models. The Executor wraps external tools into a uniform input and output format, allowing the tool to be invoked with structural commands. Simultaneously, we have also proposed an Inspector, which manages visual inputs and intermediate results during the reasoning process. It provides the Planner with summaries and metadata of all currently available visual materials. The combination of the Inspector and the Executor allows the model to efficiently
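A minimal sketch of how these roles could fit together is given below; all class and function names are illustrative assumptions and do not correspond to the authors' actual implementation, and the llm argument stands for any text-in, text-out model call.

```python
from dataclasses import dataclass, field

@dataclass
class Inspector:
    """Tracks summaries and metadata of every visual asset produced so far."""
    assets: dict = field(default_factory=dict)

    def register(self, name: str, summary: str) -> None:
        self.assets[name] = summary

    def overview(self) -> str:
        return "\n".join(f"{name}: {summary}" for name, summary in self.assets.items())

class Executor:
    """Wraps external tools behind a uniform command interface."""
    def __init__(self, tools: dict):
        self.tools = tools  # e.g., {"subtitle_ground": fn, "caption": fn, ...}

    def run(self, command: str, **kwargs) -> str:
        return self.tools[command](**kwargs)

class Planner:
    """Asks an LLM for the next structured tool call given the reasoning progress."""
    def __init__(self, llm):
        self.llm = llm

    def next_step(self, question: str, inspector: Inspector, history: list) -> str:
        prompt = (
            f"Question: {question}\n"
            f"Visual materials:\n{inspector.overview()}\n"
            f"Steps so far: {history}\n"
            "Emit the next tool call as code, or FINISH(<answer>)."
        )
        return self.llm(prompt)
```

A driver loop would alternate Planner, Executor, and Inspector calls until the Planner emits a final answer, with the Learner (described in a later chunk) retrying failed runs and storing successful traces.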
2306.08640#12
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
Recent research on Large Language Models (LLMs) has led to remarkable advancements in general NLP AI assistants. Some studies have further explored the use of LLMs for planning and invoking models or APIs to address more general multi-modal user queries. Despite this progress, complex visual-based tasks still remain challenging due to the diverse nature of visual tasks. This diversity is reflected in two aspects: 1) Reasoning paths. For many real-life applications, it is hard to accurately decompose a query simply by examining the query itself. Planning based on the specific visual content and the results of each step is usually required. 2) Flexible inputs and intermediate results. Input forms could be flexible for in-the-wild cases, and involves not only a single image or video but a mixture of videos and images, e.g., a user-view image with some reference videos. Besides, a complex reasoning process will also generate diverse multimodal intermediate results, e.g., video narrations, segmented video clips, etc. To address such general cases, we propose a multi-modal AI assistant, AssistGPT, with an interleaved code and language reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate LLMs with various tools. Specifically, the Planner is capable of using natural language to plan which tool in Executor should do next based on the current reasoning progress. Inspector is an efficient memory manager to assist the Planner to feed proper visual information into a specific tool. Finally, since the entire reasoning process is complex and flexible, a Learner is designed to enable the model to autonomously explore and discover the optimal solution. We conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving state-of-the-art results. Moreover, showcases demonstrate the ability of our system to handle questions far more complex than those found in the benchmarks.
http://arxiv.org/pdf/2306.08640
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
cs.CV
Project page: https://showlab.github.io/assistgpt/
null
cs.CV
20230614
20230628
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "1810.04805" }, { "id": "2305.06355" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2201.11903" }, { "id": "2204.00598" }, { "id": "2212.00280" }, { "id": "2112.08614" }, { "id": "2304.04370" }, { "id": "2107.03374" }, { "id": "2303.16434" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2205.10747" }, { "id": "2303.05499" }, { "id": "2211.09699" }, { "id": "2205.14100" }, { "id": "2212.09522" }, { "id": "2303.11381" }, { "id": "2301.12597" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2212.04356" }, { "id": "2303.17580" }, { "id": "1908.07490" }, { "id": "2304.09842" }, { "id": "2208.12037" }, { "id": "2209.10918" }, { "id": "2305.06500" }, { "id": "2211.11559" } ]
2306.08651
12
LLMs for Robotics. Past work uses semantic knowledge in LLMs for task planning. Methods like SayCan decompose natural language tasks into primitive action plans [42, 43, 44]. In addition, approaches such as Code as Policies [45, 46] use LLMs to write Python programs that plan with executable robot policy code. Other approaches use multimodal sequence models to reason about language-conditioned manipulation [47, 48, 49, 50]. We use the semantic awareness of an LLM to reason about action plans. Unlike the above works, an LLM interactively queries an off-the-shelf VLM to obtain a grounded understanding of the scene. # 3 Grounding Social Reasoning
2306.08651#12
Toward Grounded Social Reasoning
Consider a robot tasked with tidying a desk with a meticulously constructed Lego sports car. A human may recognize that it is not socially appropriate to disassemble the sports car and put it away as part of the "tidying". How can a robot reach that conclusion? Although large language models (LLMs) have recently been used to enable social reasoning, grounding this reasoning in the real world has been challenging. To reason in the real world, robots must go beyond passively querying LLMs and *actively gather information from the environment* that is required to make the right decision. For instance, after detecting that there is an occluded car, the robot may need to actively perceive the car to know whether it is an advanced model car made out of Legos or a toy car built by a toddler. We propose an approach that leverages an LLM and vision language model (VLM) to help a robot actively perceive its environment to perform grounded social reasoning. To evaluate our framework at scale, we release the MessySurfaces dataset which contains images of 70 real-world surfaces that need to be cleaned. We additionally illustrate our approach with a robot on 2 carefully designed surfaces. We find an average 12.9% improvement on the MessySurfaces benchmark and an average 15% improvement on the robot experiments over baselines that do not use active perception. The dataset, code, and videos of our approach can be found at https://minaek.github.io/groundedsocialreasoning.
http://arxiv.org/pdf/2306.08651
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
cs.RO, cs.AI
null
null
cs.RO
20230614
20230614
[ { "id": "1606.06565" }, { "id": "2008.02275" }, { "id": "2303.00001" }, { "id": "2305.06500" }, { "id": "2201.05320" } ]
2306.08302
13
Fig. 2. Representative large language models (LLMs) in recent years. Open-source models are represented by solid squares, while closed-source models are represented by hollow squares. [Fig. 3 depicts a Transformer encoder-decoder block: the encoder stacks self-attention and feed-forward layers, and the decoder stacks self-attention, encoder-decoder attention, and feed-forward layers.] ...responsible for encoding the input sentence into a hidden space, and the decoder is used to generate the target output text. The training strategies in encoder-decoder LLMs can be more flexible. For example, T5 [3] is pre-trained by masking and predicting spans of masked words. UL2 [54] unifies several training targets such as different masking spans and masking frequencies. Encoder-decoder LLMs (e.g., T0 [55], ST-MoE [56], and GLM-130B [57]) are able to directly resolve tasks that generate sentences based on some context, such as summarization, translation, and question answering [58]. Fig. 3. An illustration of the Transformer-based LLMs with self-attention mechanism. # 2.1.3 Decoder-only LLMs.
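As a concrete illustration of the span-masking objective mentioned for T5 (a generic sketch, not code from any surveyed paper), the snippet below corrupts an input by replacing spans with sentinel tokens and builds the target sequence a T5-style encoder-decoder model would be trained to generate.

```python
def span_corrupt(tokens, spans):
    """Replace the given (start, length) spans with sentinel tokens.

    Returns the corrupted input sequence and the target sequence.
    Spans must be non-overlapping and sorted by start index.
    """
    inp, tgt, cursor = [], [], 0
    for i, (start, length) in enumerate(spans):
        sentinel = f"<extra_id_{i}>"
        inp.extend(tokens[cursor:start])   # keep the unmasked prefix
        inp.append(sentinel)               # mark the masked span in the input
        tgt.append(sentinel)               # the target reproduces the span after its sentinel
        tgt.extend(tokens[start:start + length])
        cursor = start + length
    inp.extend(tokens[cursor:])
    return inp, tgt

tokens = "knowledge graphs store rich factual knowledge about the world".split()
inp, tgt = span_corrupt(tokens, spans=[(1, 2), (5, 1)])
print(" ".join(inp))  # knowledge <extra_id_0> rich factual <extra_id_1> about the world
print(" ".join(tgt))  # <extra_id_0> graphs store <extra_id_1> knowledge
```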
2306.08302#13
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
2306.08568
13
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. ### Instruction: {instruction} ### Response: To construct the training dataset, we initialized it with the 20K instruction-following dataset called Code Alpaca. We iteratively employ the Evol-Instruct technique on this dataset consisting of 20,000 samples to produce evolved data. After each round of data evolution, we merge the evolved data from all previous rounds with the original dataset to finetune StarCoder and assess the pass@1 metric on HumanEval [31]. Once we observe a decline in the pass@1 metric, we will discontinue the usage of Evol-Instruct and choose the model with the highest pass@1 as the ultimate model. # 4 Experiment This section begins by providing a comprehensive overview of the baseline models in our experiments. Subsequently, we present the performance of our models on four code generation benchmarks: HumanEval [31], HumanEval+ [32], MBPP [33], and DS-1000 [34]. # 4.1 Baselines
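The data-evolution loop and its stopping rule can be sketched as follows; the callables evolve, finetune, and eval_pass_at_1 are placeholders assumed for illustration, not the released training script.

```python
def evolve_until_plateau(seed_data, base_model, evolve, finetune, eval_pass_at_1, max_rounds=5):
    """Iteratively evolve instructions, fine-tune, and stop once pass@1 declines.

    seed_data         : initial instruction-following set (e.g., the 20K Code Alpaca data)
    evolve(data)      : applies the code Evol-Instruct prompts to one round of data
    finetune(m, d)    : fine-tunes model m on dataset d and returns the tuned model
    eval_pass_at_1(m) : evaluates model m on HumanEval and returns its pass@1
    """
    merged = list(seed_data)          # all rounds merged with the original dataset
    latest = list(seed_data)          # data produced by the most recent round
    best_model, best_score = None, float("-inf")

    for _ in range(max_rounds):
        latest = evolve(latest)
        merged = merged + latest
        model = finetune(base_model, merged)
        score = eval_pass_at_1(model)
        if score < best_score:        # decline observed: stop using Evol-Instruct
            break
        best_model, best_score = model, score

    return best_model, best_score     # the model with the highest pass@1 so far
```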
2306.08568#13
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+. Our code, model weights, and data are public at https://github.com/nlpxucan/WizardLM
http://arxiv.org/pdf/2306.08568
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
cs.CL, cs.AI
Large Language model, Code Generation, Code LLMs
null
cs.CL
20230614
20230614
[ { "id": "2305.06161" }, { "id": "2304.12244" }, { "id": "2212.10560" } ]
2306.08640
13
the Planner with summaries and metadata of all currently available visual materials. The combination of the Inspector and the Executor allows the model to efficiently implement complex reasoning. Moreover, it is challenging for the model to ensure correct reasoning in a zero-shot scenario. The Planner might output invalid code or unreasonable paths. To enable the system to continuously improve, we proposed the Learner, which checks whether the prediction process is reasonable or judges the correctness of the predicted results based on annotations. It allows the system to try multiple times and record successful examples as in-context examples.
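A rough sketch of the Learner's retry-and-record behaviour follows; the function names and the idea of a shared memory list are assumptions for illustration only.

```python
def learn_by_retrying(question, reference, run_assistant, check, memory, max_tries=3):
    """Re-run the assistant up to max_tries times and keep successful reasoning
    traces as in-context examples for later questions."""
    answer, trace = None, None
    for _ in range(max_tries):
        # run_assistant returns the final answer plus the full reasoning trace
        answer, trace = run_assistant(question, in_context_examples=memory)
        # check() may inspect the trace for validity or compare against an annotation
        if check(answer, trace, reference):
            memory.append(trace)      # record the successful example
            break
    return answer
```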
2306.08640#13
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
Recent research on Large Language Models (LLMs) has led to remarkable advancements in general NLP AI assistants. Some studies have further explored the use of LLMs for planning and invoking models or APIs to address more general multi-modal user queries. Despite this progress, complex visual-based tasks still remain challenging due to the diverse nature of visual tasks. This diversity is reflected in two aspects: 1) Reasoning paths. For many real-life applications, it is hard to accurately decompose a query simply by examining the query itself. Planning based on the specific visual content and the results of each step is usually required. 2) Flexible inputs and intermediate results. Input forms could be flexible for in-the-wild cases, and involves not only a single image or video but a mixture of videos and images, e.g., a user-view image with some reference videos. Besides, a complex reasoning process will also generate diverse multimodal intermediate results, e.g., video narrations, segmented video clips, etc. To address such general cases, we propose a multi-modal AI assistant, AssistGPT, with an interleaved code and language reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate LLMs with various tools. Specifically, the Planner is capable of using natural language to plan which tool in Executor should do next based on the current reasoning progress. Inspector is an efficient memory manager to assist the Planner to feed proper visual information into a specific tool. Finally, since the entire reasoning process is complex and flexible, a Learner is designed to enable the model to autonomously explore and discover the optimal solution. We conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving state-of-the-art results. Moreover, showcases demonstrate the ability of our system to handle questions far more complex than those found in the benchmarks.
http://arxiv.org/pdf/2306.08640
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
cs.CV
Project page: https://showlab.github.io/assistgpt/
null
cs.CV
20230614
20230628
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "1810.04805" }, { "id": "2305.06355" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2201.11903" }, { "id": "2204.00598" }, { "id": "2212.00280" }, { "id": "2112.08614" }, { "id": "2304.04370" }, { "id": "2107.03374" }, { "id": "2303.16434" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2205.10747" }, { "id": "2303.05499" }, { "id": "2211.09699" }, { "id": "2205.14100" }, { "id": "2212.09522" }, { "id": "2303.11381" }, { "id": "2301.12597" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2212.04356" }, { "id": "2303.17580" }, { "id": "1908.07490" }, { "id": "2304.09842" }, { "id": "2208.12037" }, { "id": "2209.10918" }, { "id": "2305.06500" }, { "id": "2211.11559" } ]
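The Planner / Executor / Inspector / Learner interplay described in the chunk above can be pictured as a control loop. The Python sketch below is only an illustration under our own assumptions: `call_llm` and the entries of `tools` are caller-supplied placeholders, the plan format is invented for the example, and none of this reproduces the authors' actual AssistGPT implementation.

```python
# Illustrative sketch of a Plan-Execute-Inspect-Learn style loop.
# `call_llm` (str -> str) and `tools` (name -> callable) are placeholders
# supplied by the caller; the plan format "<tool> | <material>" is invented here.

class Memory:
    """Inspector role: keep summaries and metadata of available visual materials."""
    def __init__(self):
        self.items = {}  # name -> (summary, payload)

    def register(self, name, summary, payload=None):
        self.items[name] = (summary, payload)

    def summaries(self):
        return "\n".join(f"{name}: {summary}" for name, (summary, _) in self.items.items())


def peil_loop(query, call_llm, tools, memory, max_steps=8):
    history = []
    for step in range(max_steps):
        # Planner role: choose the next tool call from the reasoning progress so far.
        plan = call_llm(
            f"Query: {query}\nMaterials:\n{memory.summaries()}\n"
            f"Progress so far: {history}\n"
            "Reply 'FINISH: <answer>' or '<tool name> | <material name>':"
        ).strip()
        if plan.startswith("FINISH:"):
            return plan[len("FINISH:"):].strip(), history
        if "|" not in plan:                      # invalid plan; record it and try again
            history.append((plan, "invalid plan"))
            continue
        tool_name, material = (part.strip() for part in plan.split("|", 1))
        # Executor role: run the chosen tool on the referenced material.
        result = tools[tool_name](memory.items[material][1])
        memory.register(f"result_{step}", str(result), result)
        history.append((plan, str(result)))
    # Learner role (not shown): successful (plan, result) traces could be stored
    # and prepended to later prompts as in-context examples.
    return None, history
```

A successful trace returned by `peil_loop` could then be cached and injected into future prompts, which is roughly the role the Learner plays in the description above.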
2306.08651
13
We propose a framework that combines existing foundation models in a novel way to enable active information gathering, shown in Fig. 1. Our framework makes multiple calls to an LLM and a VLM to gather information. The LLM plays a number of distinct roles in our framework that we distinguish below: generating informative follow-up questions, guiding active perception, and choosing an action plan. In every call, the LLM takes in and outputs a string, LLM: A∗ → A∗, and the VLM takes in an (image, string) pair and outputs a string, VLM: I × A∗ → A∗, where A∗ is the set of all strings and I is the set of all images (an interface sketch follows this record). The context Ci ∈ A∗ contains information about the scene that the robot has gathered up to iteration i of the framework. Initially, the inputs to our framework are an image of the scene im0 ∈ I (i.e., an unblurred image from Fig. 1) and an instruction (e.g., “clean the surface”). VLM Describes the Scene. Our framework starts with the VLM producing an initial description C0 of the scene from the scene image im0. Depending on the VLM, the description can contain varying amounts of information —
2306.08651#13
Toward Grounded Social Reasoning
Consider a robot tasked with tidying a desk with a meticulously constructed Lego sports car. A human may recognize that it is not socially appropriate to disassemble the sports car and put it away as part of the "tidying". How can a robot reach that conclusion? Although large language models (LLMs) have recently been used to enable social reasoning, grounding this reasoning in the real world has been challenging. To reason in the real world, robots must go beyond passively querying LLMs and *actively gather information from the environment* that is required to make the right decision. For instance, after detecting that there is an occluded car, the robot may need to actively perceive the car to know whether it is an advanced model car made out of Legos or a toy car built by a toddler. We propose an approach that leverages an LLM and vision language model (VLM) to help a robot actively perceive its environment to perform grounded social reasoning. To evaluate our framework at scale, we release the MessySurfaces dataset which contains images of 70 real-world surfaces that need to be cleaned. We additionally illustrate our approach with a robot on 2 carefully designed surfaces. We find an average 12.9% improvement on the MessySurfaces benchmark and an average 15% improvement on the robot experiments over baselines that do not use active perception. The dataset, code, and videos of our approach can be found at https://minaek.github.io/groundedsocialreasoning.
http://arxiv.org/pdf/2306.08651
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
cs.RO, cs.AI
null
null
cs.RO
20230614
20230614
[ { "id": "1606.06565" }, { "id": "2008.02275" }, { "id": "2303.00001" }, { "id": "2305.06500" }, { "id": "2201.05320" } ]
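As a reading aid for the notation above (LLM: A∗ → A∗, VLM: I × A∗ → A∗, context Ci, scene image im0), here is a minimal Python sketch of the interfaces; the type aliases, the prompt wording, and `build_initial_context` are our own illustrative assumptions, not the authors' code.

```python
from typing import Callable

Image = bytes                          # placeholder image type for this sketch
LLM = Callable[[str], str]             # LLM: A* -> A*  (string in, string out)
VLM = Callable[[Image, str], str]      # VLM: I x A* -> A*  ((image, string) in, string out)

def build_initial_context(vlm: VLM, im0: Image, instruction: str) -> str:
    """Step (1): the VLM describes the scene image im0, yielding the initial context C0."""
    description = vlm(im0, "List and briefly describe the objects visible in this scene.")
    return f"Instruction: {instruction}\nScene description: {description}"
```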
2306.08302
14
Fig. 3. An illustration of the Transformer-based LLMs with self-attention mechanism. modules empowered by a self-attention mechanism. Based on the architecture structure, LLMs can be categorized into three groups: 1) encoder-only LLMs, 2) encoder-decoder LLMs, and 3) decoder-only LLMs. As shown in Fig. 2, we summarize several representative LLMs with different model architectures, model sizes, and open-source availabilities. # 2.1.1 Encoder-only LLMs. Encoder-only large language models only use the encoder to encode the sentence and understand the relationships between words. The common training paradigm for these models is to predict the masked words in an input sentence (a small illustrative example follows this record). This method is unsupervised and can be trained on a large-scale corpus. Encoder-only LLMs like BERT [1], ALBERT [51], RoBERTa [2], and ELECTRA [52] require adding an extra prediction head to resolve downstream tasks. These models are most effective for tasks that require understanding the entire sentence, such as text classification [26] and named entity recognition [53]. # 2.1.3 Decoder-only LLMs.
2306.08302#14
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
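The masked-word training objective mentioned for encoder-only models can be poked at directly with an off-the-shelf fill-mask pipeline. The snippet below assumes the Hugging Face `transformers` library and a downloadable `bert-base-uncased` checkpoint are available; it is a generic illustration, not something taken from the surveyed paper.

```python
from transformers import pipeline

# Encoder-only models such as BERT are trained to recover masked tokens,
# so a fill-mask pipeline exposes that pre-training objective directly.
unmasker = pipeline("fill-mask", model="bert-base-uncased")
predictions = unmasker("Knowledge graphs store structured [MASK] as triples.")
for p in predictions[:3]:
    print(p["token_str"], round(p["score"], 3))
```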
2306.08568
14
# 4.1 Baselines Closed-Source Models. Multiple technology companies have successfully developed highly proficient LLMs while choosing not to publicly release them. These models are referred to as closed-source models. For our research, we incorporate a substantial number of these models as our baselines. Specifically, our baselines encompass the following: (i) OpenAI’s GPT3.5&4 [2], Code-Davinci-002 [38], Code-Cushman-001 [38], and Codex [16]; (ii) Google’s Bard, PaLM 2 [4], PaLM [3], and LaMDA [40]; (iii) Google DeepMind’s AlphaCode [12]; and (iv) Anthropic’s Claude. Open-Source Models. Several open-source LLMs have been made available to the AI community, although their performance generally lags well behind the closed-source models. As part of our research, we incorporate a significant number of these open-source models as our baselines. Our baselines encompass the following models: StarCoder [11], LLaMa [8], CodeGen [13], # 5 https://github.com/sahil280114/codealpaca
2306.08568#14
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+. Our code, model weights, and data are public at https://github.com/nlpxucan/WizardLM
http://arxiv.org/pdf/2306.08568
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
cs.CL, cs.AI
Large Language model, Code Generation, Code LLMs
null
cs.CL
20230614
20230614
[ { "id": "2305.06161" }, { "id": "2304.12244" }, { "id": "2212.10560" } ]
2306.08640
14
The current version of AssistGPT integrates 10+ tools for different functions, including image detection, captioning, region grounding, temporal grounding, OCR, object enumeration, speech-to-text, etc. By combining these functionalities, AssistGPT can accomplish a wide range of multi-modal tasks that are still hard for existing systems. In summary, our contributions are as follows: 1) We construct a general multi-modal AI assistant that can accomplish diverse visual-related tasks through the cooperation of multiple models. 2) We propose a new compositional reasoning method that reasons in an interleaved language-and-code manner. A simple learning mechanism is also proposed to improve AssistGPT’s ability in planning. 3) We showcase AssistGPT’s capabilities not only through benchmark results but also through realistic applications for processing complex images and long-form videos, understanding high-level queries, and handling flexible inputs. # 2 Related Work
2306.08640#14
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
Recent research on Large Language Models (LLMs) has led to remarkable advancements in general NLP AI assistants. Some studies have further explored the use of LLMs for planning and invoking models or APIs to address more general multi-modal user queries. Despite this progress, complex visual-based tasks still remain challenging due to the diverse nature of visual tasks. This diversity is reflected in two aspects: 1) Reasoning paths. For many real-life applications, it is hard to accurately decompose a query simply by examining the query itself. Planning based on the specific visual content and the results of each step is usually required. 2) Flexible inputs and intermediate results. Input forms could be flexible for in-the-wild cases, and involves not only a single image or video but a mixture of videos and images, e.g., a user-view image with some reference videos. Besides, a complex reasoning process will also generate diverse multimodal intermediate results, e.g., video narrations, segmented video clips, etc. To address such general cases, we propose a multi-modal AI assistant, AssistGPT, with an interleaved code and language reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate LLMs with various tools. Specifically, the Planner is capable of using natural language to plan which tool in Executor should do next based on the current reasoning progress. Inspector is an efficient memory manager to assist the Planner to feed proper visual information into a specific tool. Finally, since the entire reasoning process is complex and flexible, a Learner is designed to enable the model to autonomously explore and discover the optimal solution. We conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving state-of-the-art results. Moreover, showcases demonstrate the ability of our system to handle questions far more complex than those found in the benchmarks.
http://arxiv.org/pdf/2306.08640
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
cs.CV
Project page: https://showlab.github.io/assistgpt/
null
cs.CV
20230614
20230628
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "1810.04805" }, { "id": "2305.06355" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2201.11903" }, { "id": "2204.00598" }, { "id": "2212.00280" }, { "id": "2112.08614" }, { "id": "2304.04370" }, { "id": "2107.03374" }, { "id": "2303.16434" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2205.10747" }, { "id": "2303.05499" }, { "id": "2211.09699" }, { "id": "2205.14100" }, { "id": "2212.09522" }, { "id": "2303.11381" }, { "id": "2301.12597" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2212.04356" }, { "id": "2303.17580" }, { "id": "1908.07490" }, { "id": "2304.09842" }, { "id": "2208.12037" }, { "id": "2209.10918" }, { "id": "2305.06500" }, { "id": "2211.11559" } ]
2306.08651
14
with the VLM producing an initial description C0 of the scene from the scene image im0. Depending on the VLM, the description can contain varying amounts of information — in the most uninformative case, it may simply list the objects that are present. In our experiments, this is the description that we use. LLM Generates Follow-Up Questions. To identify what information is missing from C0, we use an LLM to generate informative follow-up questions as shown in stage (2) of Fig. 1. We prompt an LLM with C0 and ask the LLM to produce a set of follow-up questions Qi = {qi,1, …, qi,K} for the K objects. LLMs are apt for this task because of their commonsense reasoning abilities. We use Chain-of-Thought prompting [51] where we first ask the LLM to reason about the socially appropriate way to tidy each object before producing a follow-up question (see examples in the supplementary; a prompt-construction sketch also follows this record). For example, the LLM could reason that the sports car should be put away if it is a toy but left on display if someone built it. The resulting follow-up question asks whether the sports car is built with Lego blocks. We assume that the information in C0 is accurate (i.e., correctly lists the names of all the
2306.08651#14
Toward Grounded Social Reasoning
Consider a robot tasked with tidying a desk with a meticulously constructed Lego sports car. A human may recognize that it is not socially appropriate to disassemble the sports car and put it away as part of the "tidying". How can a robot reach that conclusion? Although large language models (LLMs) have recently been used to enable social reasoning, grounding this reasoning in the real world has been challenging. To reason in the real world, robots must go beyond passively querying LLMs and *actively gather information from the environment* that is required to make the right decision. For instance, after detecting that there is an occluded car, the robot may need to actively perceive the car to know whether it is an advanced model car made out of Legos or a toy car built by a toddler. We propose an approach that leverages an LLM and vision language model (VLM) to help a robot actively perceive its environment to perform grounded social reasoning. To evaluate our framework at scale, we release the MessySurfaces dataset which contains images of 70 real-world surfaces that need to be cleaned. We additionally illustrate our approach with a robot on 2 carefully designed surfaces. We find an average 12.9% improvement on the MessySurfaces benchmark and an average 15% improvement on the robot experiments over baselines that do not use active perception. The dataset, code, and videos of our approach can be found at https://minaek.github.io/groundedsocialreasoning.
http://arxiv.org/pdf/2306.08651
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
cs.RO, cs.AI
null
null
cs.RO
20230614
20230614
[ { "id": "1606.06565" }, { "id": "2008.02275" }, { "id": "2303.00001" }, { "id": "2305.06500" }, { "id": "2201.05320" } ]
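A rough sketch of the follow-up-question step described above: the LLM is prompted with the initial description C0 and asked, object by object, to reason first and then emit one clarifying question. `call_llm` is a placeholder, and the prompt wording and the "Question:" convention are our assumptions, not the paper's exact prompts (which are in its supplementary material).

```python
def generate_followup_questions(call_llm, c0: str, objects: list[str]) -> dict[str, str]:
    """Return one follow-up question per object, produced with chain-of-thought style prompting."""
    questions = {}
    for obj in objects:
        prompt = (
            f"{c0}\n"
            f"Object: {obj}\n"
            "First, reason step by step about the socially appropriate way to tidy this object.\n"
            "Then end with a single line starting with 'Question:' containing ONE follow-up\n"
            "question whose answer would resolve the remaining ambiguity.\n"
            "Reasoning:"
        )
        response = call_llm(prompt)
        # Assume the model ends with a line 'Question: ...'; fall back to the raw text otherwise.
        questions[obj] = response.rsplit("Question:", 1)[-1].strip()
    return questions
```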
2306.08302
15
Decoder-only large language models only adopt the decoder module to generate target output text. The training paradigm for these models is to predict the next word in the sentence. Large-scale decoder-only LLMs can generally perform downstream tasks from a few examples or simple instructions, without adding prediction heads or finetuning [59]. Many state-of-the-art LLMs (e.g., ChatGPT [60] and GPT-44) follow the decoder-only architecture. However, since these models are closed-source, it is challenging for academic researchers to conduct further research. Recently, Alpaca5 and Vicuna6 have been released as open-source decoder-only LLMs. These models are finetuned based on LLaMA [61] and achieve performance comparable to ChatGPT and GPT-4. # 2.1.4 Prompt Engineering Prompt engineering is a novel field that focuses on creating and refining prompts to maximize the effectiveness of large language models (LLMs) across various applications and research areas [62]. As shown in Fig. 4, a prompt is a sequence # 2.1.2 Encoder-decoder LLMs.
2306.08302#15
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
2306.08568
15
# 5 https://github.com/sahil280114/codealpaca [Figure 1: bar chart of the percentage of tests passed on HumanEval and HumanEval+ for the compared models.] Figure 1: The percentage of pass rates on the HumanEval (164 problems) with a single attempt. All baseline scores are retrieved from the LLM-Humaneval-Benchmarks [39]. Our WizardCoder generates an answer with greedy decoding. CodeGeeX [14], CodeT5+ [18], and InCoder [15]. In addition, we also include several models with instruction fine-tuning, including StarCoder-GPTeacher,6 Instruct-Codegen-16B,7 Guanaco-65B,8 and Falcon-40B-Instruct.9 # 4.2 Implementation Details
2306.08568#15
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+. Our code, model weights, and data are public at https://github.com/nlpxucan/WizardLM
http://arxiv.org/pdf/2306.08568
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
cs.CL, cs.AI
Large Language model, Code Generation, Code LLMs
null
cs.CL
20230614
20230614
[ { "id": "2305.06161" }, { "id": "2304.12244" }, { "id": "2212.10560" } ]
2306.08640
15
# 2 Related Work Multi-modal Systems. Prior to the advent of LLMs, remarkable work was done to design multi-modal models for one or several specific tasks, such as those focusing on visual appearance [26–31], visual-related knowledge [32–37], action [38–40], ego-centric videos [41–44], instructional videos [45–47], scene text [48–51], etc. They have achieved commendable results on specific tasks; however, their generalizability is relatively limited, making it challenging to address more complex and diverse questions in real-world scenarios.
2306.08640#15
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
Recent research on Large Language Models (LLMs) has led to remarkable advancements in general NLP AI assistants. Some studies have further explored the use of LLMs for planning and invoking models or APIs to address more general multi-modal user queries. Despite this progress, complex visual-based tasks still remain challenging due to the diverse nature of visual tasks. This diversity is reflected in two aspects: 1) Reasoning paths. For many real-life applications, it is hard to accurately decompose a query simply by examining the query itself. Planning based on the specific visual content and the results of each step is usually required. 2) Flexible inputs and intermediate results. Input forms could be flexible for in-the-wild cases, and involves not only a single image or video but a mixture of videos and images, e.g., a user-view image with some reference videos. Besides, a complex reasoning process will also generate diverse multimodal intermediate results, e.g., video narrations, segmented video clips, etc. To address such general cases, we propose a multi-modal AI assistant, AssistGPT, with an interleaved code and language reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate LLMs with various tools. Specifically, the Planner is capable of using natural language to plan which tool in Executor should do next based on the current reasoning progress. Inspector is an efficient memory manager to assist the Planner to feed proper visual information into a specific tool. Finally, since the entire reasoning process is complex and flexible, a Learner is designed to enable the model to autonomously explore and discover the optimal solution. We conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving state-of-the-art results. Moreover, showcases demonstrate the ability of our system to handle questions far more complex than those found in the benchmarks.
http://arxiv.org/pdf/2306.08640
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
cs.CV
Project page: https://showlab.github.io/assistgpt/
null
cs.CV
20230614
20230628
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "1810.04805" }, { "id": "2305.06355" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2201.11903" }, { "id": "2204.00598" }, { "id": "2212.00280" }, { "id": "2112.08614" }, { "id": "2304.04370" }, { "id": "2107.03374" }, { "id": "2303.16434" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2205.10747" }, { "id": "2303.05499" }, { "id": "2211.09699" }, { "id": "2205.14100" }, { "id": "2212.09522" }, { "id": "2303.11381" }, { "id": "2301.12597" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2212.04356" }, { "id": "2303.17580" }, { "id": "1908.07490" }, { "id": "2304.09842" }, { "id": "2208.12037" }, { "id": "2209.10918" }, { "id": "2305.06500" }, { "id": "2211.11559" } ]
2306.08302
16
# 2.1.2 Encoder-decoder LLMs. Encoder-decoder large language models adopt both the encoder and decoder module. The encoder module is re… 4. https://openai.com/product/gpt-4 5. https://github.com/tatsu-lab/stanford_alpaca 6. https://lmsys.org/blog/2023-03-30-vicuna/ [Fig. 4 shows a prompt composed of an Instruction, a Context of few-shot examples (Text: “This is awesome!” / Sentiment: Positive; Text: “This is bad!” / Sentiment: Negative), and the Input Text “I think the vacation is okay.”] Fig. 4. An example of sentiment classification prompt. of natural language inputs for LLMs that are specified for the task, such as sentiment classification. A prompt could contain several elements, i.e., 1) Instruction, 2) Context, and 3) Input Text. Instruction is a short sentence that instructs the model to perform a specific task. Context provides the context for the input text or few-shot examples. Input Text is the text that needs to be processed by the model. (A small prompt-assembly sketch follows this record.)
2306.08302#16
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
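The three prompt elements named above (Instruction, Context, Input Text) can be assembled mechanically. The helper below mirrors the sentiment-classification example of Fig. 4; the exact formatting is our illustrative choice.

```python
def build_prompt(instruction: str, examples: list[tuple[str, str]], input_text: str) -> str:
    """Assemble Instruction + Context (few-shot examples) + Input Text into one prompt."""
    context = "\n".join(f"Text: {text}\nSentiment: {label}" for text, label in examples)
    return f"{instruction}\n{context}\nText: {input_text}\nSentiment:"

prompt = build_prompt(
    instruction="Classify the sentiment of the text as Positive or Negative.",
    examples=[("This is awesome!", "Positive"), ("This is bad!", "Negative")],
    input_text="I think the vacation is okay.",
)
print(prompt)
```

Printing the result reproduces the layout sketched in Fig. 4: an instruction line, two labeled examples, and the unlabeled input text awaiting a sentiment.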
2306.08568
16
# 4.2 Implementation Details StarCoder [11] serves as our base foundation model. The evolved dataset consists of approximately 78k samples. To fine-tune the base model, we employ specific configurations, including a batch size of 512, a sequence length of 2048, 200 fine-tuning steps, 30 warmup steps, a learning rate of 2e-5, a cosine learning rate scheduler, and fp16 mixed precision (these hyperparameters are collected into a small config sketch after this record). # 4.3 Evaluation on HumanEval, HumanEval+, and MBPP HumanEval [31], HumanEval+ [32] and MBPP [33] are extensively utilized benchmarks within the field of Code LLMs. These benchmarks encompass a vast collection of Python programming problems, employing test cases to validate the code generated by Code LLMs. HumanEval consists of 164 original programming problems, with an average of 9.6 test cases allocated to each problem. To ensure a thorough assessment of the functional correctness of LLM-synthesized code, HumanEval+ extends the number of test cases significantly, averaging 774.8 test cases per problem. On the other hand, MBPP offers a set of 500 test programming problems, accompanied by three automated test cases per problem. The prompt format for these tasks is as follows: Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
2306.08568#16
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+. Our code, model weights, and data are public at https://github.com/nlpxucan/WizardLM
http://arxiv.org/pdf/2306.08568
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
cs.CL, cs.AI
Large Language model, Code Generation, Code LLMs
null
cs.CL
20230614
20230614
[ { "id": "2305.06161" }, { "id": "2304.12244" }, { "id": "2212.10560" } ]
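For reference, the fine-tuning hyperparameters reported in the chunk above are collected into one Python mapping; how they map onto a particular training framework (for example, global versus per-device batch size) is not specified there, so that interpretation is our assumption.

```python
# Hyperparameters as reported for WizardCoder fine-tuning; framework mapping is up to the user.
FINETUNE_CONFIG = {
    "base_model": "StarCoder-15B",   # reported base model
    "num_samples": 78_000,           # approximate size of the evolved dataset
    "batch_size": 512,               # assumed to be the global batch size
    "sequence_length": 2048,
    "max_steps": 200,
    "warmup_steps": 30,
    "learning_rate": 2e-5,
    "lr_scheduler": "cosine",
    "mixed_precision": "fp16",
}
```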
2306.08640
16
generalizability is relatively limited, making it challenging to address more complex and diverse questions in real-world scenarios. Recently, two types of strategies have been proposed for developing a general multi-modal system. One is pre-training LLMs to support visual features as conditional inputs. The representative models are GPT-4 [52], PaLM-E [53], BLIP-2 [54], and Mini-GPT4 [55]. Despite these methods being capable of directly processing multi-modal input, they still exhibit limitations in addressing advanced functional needs, such as image spatial grounding, long-form video grounding, and audio comprehension. Additionally, the computational cost of scaling these models can be extremely high. The alternative strategy aims to combine multiple models or APIs to accomplish complex multi-modal reasoning. For instance, models like the Socratic model [6] and Visual ChatGPT [8] achieve this by connecting ChatGPT with image generation models. HuggingGPT [17] combines a variety of Huggingface models with LLMs. ViperGPT [21] employs Codex [56] to call visual APIs via Python programming. Our AssistGPT falls into the second category by combining and invoking various modules for multi-modal reasoning, but we propose a new framework, PEIL, for integrating external tools and models.
2306.08640#16
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
Recent research on Large Language Models (LLMs) has led to remarkable advancements in general NLP AI assistants. Some studies have further explored the use of LLMs for planning and invoking models or APIs to address more general multi-modal user queries. Despite this progress, complex visual-based tasks still remain challenging due to the diverse nature of visual tasks. This diversity is reflected in two aspects: 1) Reasoning paths. For many real-life applications, it is hard to accurately decompose a query simply by examining the query itself. Planning based on the specific visual content and the results of each step is usually required. 2) Flexible inputs and intermediate results. Input forms could be flexible for in-the-wild cases, and involves not only a single image or video but a mixture of videos and images, e.g., a user-view image with some reference videos. Besides, a complex reasoning process will also generate diverse multimodal intermediate results, e.g., video narrations, segmented video clips, etc. To address such general cases, we propose a multi-modal AI assistant, AssistGPT, with an interleaved code and language reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate LLMs with various tools. Specifically, the Planner is capable of using natural language to plan which tool in Executor should do next based on the current reasoning progress. Inspector is an efficient memory manager to assist the Planner to feed proper visual information into a specific tool. Finally, since the entire reasoning process is complex and flexible, a Learner is designed to enable the model to autonomously explore and discover the optimal solution. We conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving state-of-the-art results. Moreover, showcases demonstrate the ability of our system to handle questions far more complex than those found in the benchmarks.
http://arxiv.org/pdf/2306.08640
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
cs.CV
Project page: https://showlab.github.io/assistgpt/
null
cs.CV
20230614
20230628
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "1810.04805" }, { "id": "2305.06355" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2201.11903" }, { "id": "2204.00598" }, { "id": "2212.00280" }, { "id": "2112.08614" }, { "id": "2304.04370" }, { "id": "2107.03374" }, { "id": "2303.16434" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2205.10747" }, { "id": "2303.05499" }, { "id": "2211.09699" }, { "id": "2205.14100" }, { "id": "2212.09522" }, { "id": "2303.11381" }, { "id": "2301.12597" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2212.04356" }, { "id": "2303.17580" }, { "id": "1908.07490" }, { "id": "2304.09842" }, { "id": "2208.12037" }, { "id": "2209.10918" }, { "id": "2305.06500" }, { "id": "2211.11559" } ]
2306.08651
16
Robot Actively Perceives the Scene. At this stage, one might normally query the VLM with the original scene image im0. However, if the object in question is obstructed or too small to see, the scene image might not provide enough information for the VLM to answer the follow-up question accurately (e.g., the sports car is obstructed in Fig. 1). Instead, we would like to provide an unobstructed close-up image imi,k ∈ I of the object k to “help” the VLM accurately answer the generated questions. Taking informative close-up images requires interaction with the environment — something we can use a robot for.
2306.08651#16
Toward Grounded Social Reasoning
Consider a robot tasked with tidying a desk with a meticulously constructed Lego sports car. A human may recognize that it is not socially appropriate to disassemble the sports car and put it away as part of the "tidying". How can a robot reach that conclusion? Although large language models (LLMs) have recently been used to enable social reasoning, grounding this reasoning in the real world has been challenging. To reason in the real world, robots must go beyond passively querying LLMs and *actively gather information from the environment* that is required to make the right decision. For instance, after detecting that there is an occluded car, the robot may need to actively perceive the car to know whether it is an advanced model car made out of Legos or a toy car built by a toddler. We propose an approach that leverages an LLM and vision language model (VLM) to help a robot actively perceive its environment to perform grounded social reasoning. To evaluate our framework at scale, we release the MessySurfaces dataset which contains images of 70 real-world surfaces that need to be cleaned. We additionally illustrate our approach with a robot on 2 carefully designed surfaces. We find an average 12.9% improvement on the MessySurfaces benchmark and an average 15% improvement on the robot experiments over baselines that do not use active perception. The dataset, code, and videos of our approach can be found at https://minaek.github.io/groundedsocialreasoning.
http://arxiv.org/pdf/2306.08651
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
cs.RO, cs.AI
null
null
cs.RO
20230614
20230614
[ { "id": "1606.06565" }, { "id": "2008.02275" }, { "id": "2303.00001" }, { "id": "2305.06500" }, { "id": "2201.05320" } ]
2306.08302
17
Prompt engineering seeks to improve the capacity of large language models (e.g., ChatGPT) in diverse complex tasks such as question answering, sentiment classification, and common sense reasoning. Chain-of-thought (CoT) prompting [63] enables complex reasoning capabilities through intermediate reasoning steps. Prompt engineering also enables the integration of structural data like knowledge graphs (KGs) into LLMs. Li et al. [64] simply linearize the KGs and use templates to convert the KGs into passages (a toy linearization sketch follows this record). Mindmap [65] designs a KG prompt to convert graph structure into a mind map that enables LLMs to perform reasoning on it. Prompting offers a simple way to utilize the potential of LLMs without finetuning. Proficiency in prompt engineering leads to a better understanding of the strengths and weaknesses of LLMs. # 2.2 Knowledge Graphs (KGs)
2306.08302#17
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
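Template-based KG linearization of the kind attributed to Li et al. [64] above can be sketched in a few lines; the template and the example triples are purely illustrative and not taken from that work.

```python
def linearize(triples: list[tuple[str, str, str]]) -> str:
    """Verbalize (head, relation, tail) triples and join them into a passage for a prompt."""
    template = "{h} {r} {t}."
    return " ".join(template.format(h=h, r=r.replace("_", " "), t=t) for h, r, t in triples)

passage = linearize([
    ("Joe Biden", "president_of", "USA"),
    ("USA", "has_capital", "Washington"),
])
# -> "Joe Biden president of USA. USA has capital Washington."
```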
2306.08568
17
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. ### Instruction: Create a Python script for this problem: {Question} ### Response: (This template is restated as a small helper after this record.) 6 https://huggingface.co/GeorgiaTechResearchInstitute/starcoder-gpteacher-code-instruct 7 https://huggingface.co/sahil2801/instruct-codegen-16B 8 https://huggingface.co/TheBloke/guanaco-65B-HF 9 https://huggingface.co/tiiuae/falcon-40b-instruct Table 1: Results of pass@1(%) on HumanEval and MBPP. Most scores are retrieved from the papers of StarCoder [11] and CodeT5+ [18]. We follow previous work [31] and generate n samples to estimate the pass@1 score with the same set of hyper-parameters: temperature=0.2 and top_p=0.95. *: we evaluate this model by ourselves.
2306.08568#17
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+. Our code, model weights, and data are public at https://github.com/nlpxucan/WizardLM
http://arxiv.org/pdf/2306.08568
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
cs.CL, cs.AI
Large Language model, Code Generation, Code LLMs
null
cs.CL
20230614
20230614
[ { "id": "2305.06161" }, { "id": "2304.12244" }, { "id": "2212.10560" } ]
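The evaluation prompt quoted in the chunk above, wrapped in a small helper for convenience; the line breaks are an assumed layout, since the extracted text does not preserve the original spacing.

```python
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\n"
    "Create a Python script for this problem:\n"
    "{question}\n\n"
    "### Response:"
)

def build_eval_prompt(question: str) -> str:
    """Fill the quoted WizardCoder evaluation template with a benchmark problem statement."""
    return PROMPT_TEMPLATE.format(question=question)
```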
2306.08640
17
Compositional Reasoning. Compositional reasoning methods in the field of visual question answering usually decompose questions into several subtasks, each addressed by a specific module. This kind of method offers strong interpretability due to its modular structure and the clear division of responsibilities among the individual components. This idea was initially put forward by [57]. Subsequently, [58, 59] introduced an end-to-end variant based on LSTM and CNN. Traditional compositional reasoning methods are limited by language models’ parsing capabilities, often requiring ground-truth question decomposition or reinforcement learning for optimal module usage.
2306.08640#17
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
Recent research on Large Language Models (LLMs) has led to remarkable advancements in general NLP AI assistants. Some studies have further explored the use of LLMs for planning and invoking models or APIs to address more general multi-modal user queries. Despite this progress, complex visual-based tasks still remain challenging due to the diverse nature of visual tasks. This diversity is reflected in two aspects: 1) Reasoning paths. For many real-life applications, it is hard to accurately decompose a query simply by examining the query itself. Planning based on the specific visual content and the results of each step is usually required. 2) Flexible inputs and intermediate results. Input forms could be flexible for in-the-wild cases, and involves not only a single image or video but a mixture of videos and images, e.g., a user-view image with some reference videos. Besides, a complex reasoning process will also generate diverse multimodal intermediate results, e.g., video narrations, segmented video clips, etc. To address such general cases, we propose a multi-modal AI assistant, AssistGPT, with an interleaved code and language reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate LLMs with various tools. Specifically, the Planner is capable of using natural language to plan which tool in Executor should do next based on the current reasoning progress. Inspector is an efficient memory manager to assist the Planner to feed proper visual information into a specific tool. Finally, since the entire reasoning process is complex and flexible, a Learner is designed to enable the model to autonomously explore and discover the optimal solution. We conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving state-of-the-art results. Moreover, showcases demonstrate the ability of our system to handle questions far more complex than those found in the benchmarks.
http://arxiv.org/pdf/2306.08640
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
cs.CV
Project page: https://showlab.github.io/assistgpt/
null
cs.CV
20230614
20230628
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "1810.04805" }, { "id": "2305.06355" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2201.11903" }, { "id": "2204.00598" }, { "id": "2212.00280" }, { "id": "2112.08614" }, { "id": "2304.04370" }, { "id": "2107.03374" }, { "id": "2303.16434" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2205.10747" }, { "id": "2303.05499" }, { "id": "2211.09699" }, { "id": "2205.14100" }, { "id": "2212.09522" }, { "id": "2303.11381" }, { "id": "2301.12597" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2212.04356" }, { "id": "2303.17580" }, { "id": "1908.07490" }, { "id": "2304.09842" }, { "id": "2208.12037" }, { "id": "2209.10918" }, { "id": "2305.06500" }, { "id": "2211.11559" } ]
2306.08651
17
To actively gather information, the robot should proceed based on some notion of “informativeness” of camera angles. To determine “informativeness”, we can again rely on the commonsense knowledge of LLMs. Although LLMs don’t have detailed visual information about the object, they can suggest reasonable angles that will be, on average, informative. For instance, an LLM will choose to take a photo from the top of an opaque mug, instead of its sides, to see its contents. In practice, we find that this approach works well and can improve the informativeness of an image by 8%. We query an LLM to choose a close-up angle of the object from a set of angles {<FRONT>, <BACK>, <LEFT>, <RIGHT>, <TOP>} that would give an unobstructed view. We then pair the close-up images with their questions, {(imi,k, qi,k)}, and query the VLM for answers to these questions in step (4) of our framework (a sketch of this step follows this record). We concatenate the VLM’s answers for each object and append them to our context Ci to complete the iteration. To gather more information about each object, steps 1−4 can be repeated, where the number of iterations is a tunable parameter.
2306.08651#17
Toward Grounded Social Reasoning
Consider a robot tasked with tidying a desk with a meticulously constructed Lego sports car. A human may recognize that it is not socially appropriate to disassemble the sports car and put it away as part of the "tidying". How can a robot reach that conclusion? Although large language models (LLMs) have recently been used to enable social reasoning, grounding this reasoning in the real world has been challenging. To reason in the real world, robots must go beyond passively querying LLMs and *actively gather information from the environment* that is required to make the right decision. For instance, after detecting that there is an occluded car, the robot may need to actively perceive the car to know whether it is an advanced model car made out of Legos or a toy car built by a toddler. We propose an approach that leverages an LLM and vision language model (VLM) to help a robot actively perceive its environment to perform grounded social reasoning. To evaluate our framework at scale, we release the MessySurfaces dataset which contains images of 70 real-world surfaces that need to be cleaned. We additionally illustrate our approach with a robot on 2 carefully designed surfaces. We find an average 12.9% improvement on the MessySurfaces benchmark and an average 15% improvement on the robot experiments over baselines that do not use active perception. The dataset, code, and videos of our approach can be found at https://minaek.github.io/groundedsocialreasoning.
http://arxiv.org/pdf/2306.08651
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
cs.RO, cs.AI
null
null
cs.RO
20230614
20230614
[ { "id": "1606.06565" }, { "id": "2008.02275" }, { "id": "2303.00001" }, { "id": "2305.06500" }, { "id": "2201.05320" } ]
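A sketch of the angle-selection and VLM-querying step described above. `call_llm`, `call_vlm`, and `take_closeup` stand in for the LLM, the VLM, and the robot's camera routine; the prompt wording and the <FRONT> fallback are our assumptions, not the authors' implementation.

```python
ANGLES = ["<FRONT>", "<BACK>", "<LEFT>", "<RIGHT>", "<TOP>"]

def answer_followup(call_llm, call_vlm, take_closeup, obj: str, question: str) -> str:
    """Pick an informative close-up angle with the LLM, then let the VLM answer from that view."""
    angle_prompt = (
        f"Object: {obj}\n"
        f"Question to answer: {question}\n"
        f"Which close-up camera angle gives an unobstructed, informative view? "
        f"Answer with exactly one of: {', '.join(ANGLES)}."
    )
    angle = call_llm(angle_prompt).strip()
    if angle not in ANGLES:
        angle = "<FRONT>"                     # fall back to a default view
    closeup = take_closeup(obj, angle)        # robot captures the close-up image
    return call_vlm(closeup, question)        # VLM answers the follow-up from the close-up
```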
2306.08302
18
# 2.2 Knowledge Graphs (KGs) Knowledge graphs (KGs) store structured knowledge as a collection of triples KG = {(h, r, t)} ⊆ E × R × E, where E and R respectively denote the set of entities and relations (a minimal triple-store sketch follows this record). Existing knowledge graphs (KGs) can be classified into four groups based on the stored information: 1) encyclopedic KGs, 2) commonsense KGs, 3) domain-specific KGs, and 4) multi-modal KGs. We illustrate the examples of KGs of different categories in Fig. 5. 2.2.1 Encyclopedic Knowledge Graphs. Encyclopedic knowledge graphs are the most ubiquitous KGs, which represent the general knowledge in the real world. Encyclopedic knowledge graphs are often constructed by integrating information from diverse and extensive sources, including human experts, encyclopedias, and databases. Wikidata [20] is one of the most widely used encyclopedic knowledge graphs, which incorporates varieties of knowledge extracted from articles on Wikipedia. [Fig. 5 panels: commonsense, domain-specific, and multi-modal knowledge graphs.] Other typical
2306.08302#18
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
2306.08568
18
| Model | Params | HumanEval | MBPP |
|---|---|---|---|
| **Closed-source models** | | | |
| LaMDA [40] | 137B | 14.0 | - |
| AlphaCode [12] | 1.1B | 17.1 | - |
| PaLM [3] | 540B | 26.2 | 36.8 |
| PaLM-Coder [3] | 540B | 36.0 | 47.0 |
| PaLM 2-S [4] | - | 37.6 | 50.0 |
| Codex [16] | 2.5B | 21.4 | - |
| Codex [16] | 12B | 28.8 | - |
| Code-Cushman-001 [38] | - | 33.5 | 45.9 |
| Code-Davinci-002 [38] | - | 47.0 | 58.1 |
| GPT-3.5 [2] | - | 48.1 | - |
| GPT-4 [2] | - | 67.0 | - |
| **Open-source models** | | | |
| LLaMa [8] | 33B | 21.7 | 30.2 |
| LLaMa [8] | 65B | 23.7 | 37.7 |
| CodeGen-Multi [13] | 16B | 18.3 | 20.9 |
| CodeGen-Mono [13] | 16B | 29.3 | 35.3 |
| CodeGeeX [14] | 13B | 22.9 | 24.4 |
| StarCoder [11] | 15B | 33.6 | 43.6∗ |
| CodeT5+ [18] | 16B | 30.9 | - |
| InstructCodeT5+ [18] | 16B | 35.0 | - |
| WizardCoder | 15B | 57.3 (+22.3) | 51.8 (+8.2) |
2306.08568#18
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+. Our code, model weights, and data are public at https://github.com/nlpxucan/WizardLM
http://arxiv.org/pdf/2306.08568
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
cs.CL, cs.AI
Large Language model, Code Generation, Code LLMs
null
cs.CL
20230614
20230614
[ { "id": "2305.06161" }, { "id": "2304.12244" }, { "id": "2212.10560" } ]
2306.08640
18
With the advent of LLMs, question decomposition can be accomplished remarkably well in a zero-shot manner. Chain-of-thought prompts [60], Toolformer [25], and ReAct [24] enable models to plan how to solve an NLP problem. HuggingGPT [17] and ViperGPT [21] are multi-modal systems that use an LLM to parse a question into a series of reasoning steps. However, for complex queries, the model needs to determine the subsequent steps based not only on the question but also on visual inputs or feedback from previously executed modules. MMReAct [61] introduced the idea of ReAct to multi-modal systems to overcome this, but it is still under development and has not demonstrated its effectiveness on the benchmarks. Previous methods reason over either language or code, and as stated in the introduction, both have certain shortcomings. Our work is the first to propose an interleaved language and code reasoning approach that can better handle general queries and complex visual inputs.
2306.08640#18
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
Recent research on Large Language Models (LLMs) has led to remarkable advancements in general NLP AI assistants. Some studies have further explored the use of LLMs for planning and invoking models or APIs to address more general multi-modal user queries. Despite this progress, complex visual-based tasks still remain challenging due to the diverse nature of visual tasks. This diversity is reflected in two aspects: 1) Reasoning paths. For many real-life applications, it is hard to accurately decompose a query simply by examining the query itself. Planning based on the specific visual content and the results of each step is usually required. 2) Flexible inputs and intermediate results. Input forms could be flexible for in-the-wild cases, and involves not only a single image or video but a mixture of videos and images, e.g., a user-view image with some reference videos. Besides, a complex reasoning process will also generate diverse multimodal intermediate results, e.g., video narrations, segmented video clips, etc. To address such general cases, we propose a multi-modal AI assistant, AssistGPT, with an interleaved code and language reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate LLMs with various tools. Specifically, the Planner is capable of using natural language to plan which tool in Executor should do next based on the current reasoning progress. Inspector is an efficient memory manager to assist the Planner to feed proper visual information into a specific tool. Finally, since the entire reasoning process is complex and flexible, a Learner is designed to enable the model to autonomously explore and discover the optimal solution. We conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving state-of-the-art results. Moreover, showcases demonstrate the ability of our system to handle questions far more complex than those found in the benchmarks.
http://arxiv.org/pdf/2306.08640
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
cs.CV
Project page: https://showlab.github.io/assistgpt/
null
cs.CV
20230614
20230628
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "1810.04805" }, { "id": "2305.06355" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2201.11903" }, { "id": "2204.00598" }, { "id": "2212.00280" }, { "id": "2112.08614" }, { "id": "2304.04370" }, { "id": "2107.03374" }, { "id": "2303.16434" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2205.10747" }, { "id": "2303.05499" }, { "id": "2211.09699" }, { "id": "2205.14100" }, { "id": "2212.09522" }, { "id": "2303.11381" }, { "id": "2301.12597" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2212.04356" }, { "id": "2303.17580" }, { "id": "1908.07490" }, { "id": "2304.09842" }, { "id": "2208.12037" }, { "id": "2209.10918" }, { "id": "2305.06500" }, { "id": "2211.11559" } ]
2306.08651
18
[Figure content: example images for "cup" (scene view plus top, front, back, left, and right close-ups) and the benchmark question for "cup" with five state-action options: (a) State: the cup is clean and empty; Action: leave the cup as is. (b) State: the cup is filled with a beverage; Action: place cup on coaster. (c) State: the cup is empty but has dried residue inside; Action: clean and dry the cup. (d) State: the cup is filled with pens and office supplies; Action: organize the supplies in the cup. (e) State: the cup is chipped or cracked; Action: dispose of the cup.] Figure 2: MESSYSURFACES Example. Each object in MESSYSURFACES is represented by a scene image and 5 close-up images. Each object also has a benchmark question that presents 5 options to tidy the object; each option is constructed by producing a cleaning action conditioned on a hypothetical object state.
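To make the structure of such a benchmark entry concrete, here is a small Python sketch; it is our own illustration, and the field names and file paths are hypothetical rather than taken from the released dataset.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Option:
    state: str   # hypothesized object state
    action: str  # cleaning action appropriate for that state

@dataclass
class BenchmarkQuestion:
    object_name: str
    scene_image: str           # scene-level photo
    closeup_images: List[str]  # top, front, back, left, right views
    options: List[Option]      # 5 state-conditioned cleaning actions

cup = BenchmarkQuestion(
    object_name="cup",
    scene_image="cup/scene.jpg",  # hypothetical path
    closeup_images=[f"cup/{v}.jpg" for v in ("top", "front", "back", "left", "right")],
    options=[
        Option("The cup is clean and empty", "Leave the cup as is"),
        Option("The cup is filled with a beverage", "Place cup on coaster"),
        Option("The cup is empty but has dried residue inside", "Clean and dry the cup"),
        Option("The cup is filled with pens and office supplies", "Organize the supplies in the cup"),
        Option("The cup is chipped or cracked", "Dispose of the cup"),
    ],
)
```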
2306.08651#18
Toward Grounded Social Reasoning
Consider a robot tasked with tidying a desk with a meticulously constructed Lego sports car. A human may recognize that it is not socially appropriate to disassemble the sports car and put it away as part of the "tidying". How can a robot reach that conclusion? Although large language models (LLMs) have recently been used to enable social reasoning, grounding this reasoning in the real world has been challenging. To reason in the real world, robots must go beyond passively querying LLMs and *actively gather information from the environment* that is required to make the right decision. For instance, after detecting that there is an occluded car, the robot may need to actively perceive the car to know whether it is an advanced model car made out of Legos or a toy car built by a toddler. We propose an approach that leverages an LLM and vision language model (VLM) to help a robot actively perceive its environment to perform grounded social reasoning. To evaluate our framework at scale, we release the MessySurfaces dataset which contains images of 70 real-world surfaces that need to be cleaned. We additionally illustrate our approach with a robot on 2 carefully designed surfaces. We find an average 12.9% improvement on the MessySurfaces benchmark and an average 15% improvement on the robot experiments over baselines that do not use active perception. The dataset, code, and videos of our approach can be found at https://minaek.github.io/groundedsocialreasoning.
http://arxiv.org/pdf/2306.08651
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
cs.RO, cs.AI
null
null
cs.RO
20230614
20230614
[ { "id": "1606.06565" }, { "id": "2008.02275" }, { "id": "2303.00001" }, { "id": "2305.06500" }, { "id": "2201.05320" } ]
2306.08302
19
Fig. 5. Examples of different categories’ knowledge graphs, i.e., encyclopedic KGs, commonsense KGs, domain-specific KGs, and multi-modal KGs. encyclopedic knowledge graphs, like Freebase [66], Dbpedia [67], and YAGO [31] are also derived from Wikipedia. In addition, NELL [32] is a continuously improving encyclopedic knowledge graph, which automatically extracts knowledge from the web, and uses that knowledge to improve its performance over time. There are several encyclopedic knowledge graphs available in languages other than English such as CN-DBpedia [68] and Vikidia [69]. The largest knowledge graph, named Knowledge Occean (KO)7, currently contains 4,8784,3636 entities and 17,3115,8349 relations in both English and Chinese. # 2.2.2 Commonsense Knowledge Graphs.
2306.08302#19
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
2306.08568
19
Comparing with the Closed-Source Models. The SOTA LLMs for code generation, such as GPT4, Claude, and Bard, are predominantly closed-source. Acquiring access to the APIs of these models proves challenging. In this study, we adopt an alternative approach by retrieving the scores for HumanEval and HumanEval+ from the LLM-Humaneval-Benchmarks [39]. Notably, all the mentioned models generate code solutions for each problem utilizing a single attempt, and the resulting pass rate percentage is reported. To maintain consistency, we employ the same experimental setup by generating answers using greedy decoding and evaluate our WizardCoder using the provided evaluation codes. By adhering to these standardized procedures, we aim to ensure fair and comparable evaluations of our model against existing benchmarks. As depicted in Figure 1, our WizardCoder attains the third position in this benchmark, surpassing Claude-Plus (59.8 vs. 53.0) and Bard (59.8 vs. 44.5). Notably, our model exhibits a substantially smaller size compared to these models. Furthermore, our WizardCoder demonstrates a remarkable superiority over other open-source LLMs that undergo instruction fine-tuning, showcasing a significant performance margin.
2306.08568#19
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+. Our code, model weights, and data are public at https://github.com/nlpxucan/WizardLM
http://arxiv.org/pdf/2306.08568
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
cs.CL, cs.AI
Large Language model, Code Generation, Code LLMs
null
cs.CL
20230614
20230614
[ { "id": "2305.06161" }, { "id": "2304.12244" }, { "id": "2212.10560" } ]
2306.08640
19
Learning Schemes for Modular Systems. Early modular models primarily employed end-to-end Reinforcement Learning (RL) to train each module’s planning and acting from scratch. While this approach is practical for lightweight models, RL can introduce substantial overhead for systems where each module is an LLM. Toolformer [25] proposes a self-supervised technique that optimizes planning, requiring only a handful of demonstrations for each API. Specifically, Toolformer attempts various APIs to find successful examples and then fine-tunes the model. In contrast, we propose a straightforward mechanism in the multi-modal field, which can guide the system to retry and preserve the successful explorations as in-context examples (sketched below). # 3 AssistGPT
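A minimal sketch of the retry-and-cache idea described in the paragraph above; this is our own illustration, and `run_plan`, `check_success`, and the memory dictionary are hypothetical stand-ins rather than AssistGPT's actual API.

```python
def solve_with_retries(query, run_plan, check_success, memory_bank, max_retries=3):
    """Retry planning up to max_retries times; if a run succeeds, keep its reasoning
    trace as an in-context example so later queries can reuse it (Learner-style idea)."""
    examples = memory_bank.setdefault("examples", [])
    trace, answer = [], None
    for _ in range(max_retries):
        trace, answer = run_plan(query, in_context_examples=examples)
        if check_success(query, answer):
            # Preserve the successful exploration for future planning prompts.
            examples.append({"query": query, "trace": trace, "answer": answer})
            break
    return answer

# Stubbed usage:
memory = {}
result = solve_with_retries(
    "When does the toy train appear?",
    run_plan=lambda q, in_context_examples: (["Thought: ...", "Action: ..."], "around 0:12"),
    check_success=lambda q, a: a is not None,
    memory_bank=memory,
)
```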
2306.08640#19
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
Recent research on Large Language Models (LLMs) has led to remarkable advancements in general NLP AI assistants. Some studies have further explored the use of LLMs for planning and invoking models or APIs to address more general multi-modal user queries. Despite this progress, complex visual-based tasks still remain challenging due to the diverse nature of visual tasks. This diversity is reflected in two aspects: 1) Reasoning paths. For many real-life applications, it is hard to accurately decompose a query simply by examining the query itself. Planning based on the specific visual content and the results of each step is usually required. 2) Flexible inputs and intermediate results. Input forms could be flexible for in-the-wild cases, and involves not only a single image or video but a mixture of videos and images, e.g., a user-view image with some reference videos. Besides, a complex reasoning process will also generate diverse multimodal intermediate results, e.g., video narrations, segmented video clips, etc. To address such general cases, we propose a multi-modal AI assistant, AssistGPT, with an interleaved code and language reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate LLMs with various tools. Specifically, the Planner is capable of using natural language to plan which tool in Executor should do next based on the current reasoning progress. Inspector is an efficient memory manager to assist the Planner to feed proper visual information into a specific tool. Finally, since the entire reasoning process is complex and flexible, a Learner is designed to enable the model to autonomously explore and discover the optimal solution. We conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving state-of-the-art results. Moreover, showcases demonstrate the ability of our system to handle questions far more complex than those found in the benchmarks.
http://arxiv.org/pdf/2306.08640
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
cs.CV
Project page: https://showlab.github.io/assistgpt/
null
cs.CV
20230614
20230628
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "1810.04805" }, { "id": "2305.06355" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2201.11903" }, { "id": "2204.00598" }, { "id": "2212.00280" }, { "id": "2112.08614" }, { "id": "2304.04370" }, { "id": "2107.03374" }, { "id": "2303.16434" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2205.10747" }, { "id": "2303.05499" }, { "id": "2211.09699" }, { "id": "2205.14100" }, { "id": "2212.09522" }, { "id": "2303.11381" }, { "id": "2301.12597" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2212.04356" }, { "id": "2303.17580" }, { "id": "1908.07490" }, { "id": "2304.09842" }, { "id": "2208.12037" }, { "id": "2209.10918" }, { "id": "2305.06500" }, { "id": "2211.11559" } ]
2306.08651
19
LLM Chooses an Action Plan. In the final step, for each object, we prompt the LLM with the context Ci and a multiple choice question that lists different ways to tidy an object. The LLM is then instructed to choose the most socially appropriate option. The multiple choice options come from the MESSYSURFACES benchmark questions, a bank of 308 multiple-choice questions about how to clean up real-life objects found on messy surfaces. For example, in Fig. 1, the LLM chooses to leave the sports car as is because it infers that the sports car must be on display. To map the natural language action to robot behavior, we implement a series of hand-coded programmatic skill primitives that define an API the LLM can call into. See §5 for more details. # 4 The MESSYSURFACES Dataset
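As a rough illustration of the action-selection step described in the chunk above, the per-object decision could be wired up as follows; this is our own sketch, and `query_llm` plus the skill-primitive mapping are hypothetical placeholders, not the authors' implementation.

```python
def choose_cleaning_action(query_llm, context, options, skill_primitives):
    """Ask an LLM to pick the most socially appropriate option for one object,
    then dispatch the chosen natural-language action to a hand-coded primitive."""
    letters = "ABCDE"
    prompt = (
        f"{context}\n"
        "Which of the following is the most socially appropriate way to tidy this object?\n"
        + "\n".join(f"{letters[i]}. {opt}" for i, opt in enumerate(options))
        + "\nAnswer with a single letter."
    )
    letter = query_llm(prompt).strip()[:1].upper()
    action = options[letters.index(letter)]
    # skill_primitives maps action strings to robot skills,
    # e.g. {"Leave the cup as is": leave_alone, "Place cup on coaster": relocate, ...}
    return skill_primitives[action]()
```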
2306.08651#19
Toward Grounded Social Reasoning
Consider a robot tasked with tidying a desk with a meticulously constructed Lego sports car. A human may recognize that it is not socially appropriate to disassemble the sports car and put it away as part of the "tidying". How can a robot reach that conclusion? Although large language models (LLMs) have recently been used to enable social reasoning, grounding this reasoning in the real world has been challenging. To reason in the real world, robots must go beyond passively querying LLMs and *actively gather information from the environment* that is required to make the right decision. For instance, after detecting that there is an occluded car, the robot may need to actively perceive the car to know whether it is an advanced model car made out of Legos or a toy car built by a toddler. We propose an approach that leverages an LLM and vision language model (VLM) to help a robot actively perceive its environment to perform grounded social reasoning. To evaluate our framework at scale, we release the MessySurfaces dataset which contains images of 70 real-world surfaces that need to be cleaned. We additionally illustrate our approach with a robot on 2 carefully designed surfaces. We find an average 12.9% improvement on the MessySurfaces benchmark and an average 15% improvement on the robot experiments over baselines that do not use active perception. The dataset, code, and videos of our approach can be found at https://minaek.github.io/groundedsocialreasoning.
http://arxiv.org/pdf/2306.08651
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
cs.RO, cs.AI
null
null
cs.RO
20230614
20230614
[ { "id": "1606.06565" }, { "id": "2008.02275" }, { "id": "2303.00001" }, { "id": "2305.06500" }, { "id": "2201.05320" } ]
2306.08302
20
# 2.2.2 Commonsense Knowledge Graphs. Commonsense knowledge graphs formulate the knowledge about daily concepts, e.g., objects, and events, as well as their relationships [70]. Compared with encyclopedic knowledge graphs, commonsense knowledge graphs often model the tacit knowledge extracted from text such as (Car, UsedFor, Drive). ConceptNet [71] contains a wide range of commonsense concepts and relations, which can help computers understand the meanings of words people use. ATOMIC [72], [73] and ASER [74] focus on the causal effects between events, which can be used for commonsense reasoning. Some other commonsense knowledge graphs, such as TransOMCS [75] and CausalBanK [76], are automatically constructed to provide commonsense knowledge. 7. https://ko.zhonghuapu.com/ [Fig. 6 (diagram, garbled in extraction): a. KG-enhanced LLMs; b. LLM-augmented KGs; c. Synergized LLMs + KGs; with labels such as Structural Fact, Domain-specific Knowledge, Symbolic-reasoning, KG-related Tasks, Factual Knowledge, General Knowledge, Language Processing, Generalizability, and Knowledge Representation.]
2306.08302#20
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
2306.08568
20
Comparing with the Open-Source Models. In Table 1, we conduct a comprehensive comparison of our WizardCoder with other open-source models on the HumanEval and MBPP benchmarks. In contrast to the results presented in Figure 1, we adhere to the approach outlined in previous studies [31] by generating n samples for each problem to estimate the pass@1 score. The findings presented in Table 1 clearly demonstrate that our WizardCoder exhibits a substantial performance advantage over all the open-source models. From the experimental results in Figure 1 and Table 1, we have the following conclusions: 1. WizardCoder outperforms the largest closed-source LLMs, including Claude, Bard, PaLM, PaLM-2, and LaMDA, despite being significantly smaller. Table 2: Performance of WizardCoder and baseline models on DS-1000. All models are evaluated with the same set of hyper-parameters: temperature=0.2, top_p=0.5, max_length=1024. Scores are average pass@1 accuracy over 40 samples. Matplotlib (plt) task does not have the right context, so insertion and completion scores are identical.
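For reference, the n-sample pass@1 estimation mentioned above is typically computed with the unbiased pass@k estimator popularized by the Codex evaluation; the short sketch below is ours and not necessarily the authors' exact script.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples is correct,
    given n generations per problem of which c pass the unit tests."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 20 generations for a problem, 7 of them pass -> estimated pass@1
print(pass_at_k(n=20, c=7, k=1))  # ≈ 0.35 (reduces to c/n when k=1)
```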
2306.08568#20
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+. Our code, model weights, and data are public at https://github.com/nlpxucan/WizardLM
http://arxiv.org/pdf/2306.08568
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
cs.CL, cs.AI
Large Language model, Code Generation, Code LLMs
null
cs.CL
20230614
20230614
[ { "id": "2305.06161" }, { "id": "2304.12244" }, { "id": "2212.10560" } ]
2306.08640
20
# 3 AssistGPT Overview. AssistGPT is a general multi-modal AI assistant system that can dynamically engage various tools in an interleaved language and code manner. Specifically, given a general language query and reference images or videos as inputs, the goal of AssistGPT is to generate the desired answer. As shown in Fig. 3, AssistGPT is achieved by cooperation with four core modules: (a) Planner, (b) Executor, (c) Inspector, and (d) Learner. The Planner § 3.1 aims to control the whole reasoning process, with the Executor § 3.2 supplying valuable feedback to the Planner by executing external tools. The Inspector § 3.3 manages the input and intermediate results and assists the Planner in feeding proper content to the Executor. The Learner § 3.4 is capable of assessing the system performance and recording successful explorations as in-context examples. In the following sections, we will go through each module in detail.
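A highly simplified sketch of how such a plan-execute-inspect loop could be organized; this is our own pseudocode-style illustration, and the planner, executor, and inspector objects with their methods are assumed interfaces rather than AssistGPT's real code.

```python
def assist_loop(query, planner, executor, inspector, max_steps=8):
    """Interleaved reasoning loop: the planner proposes a thought plus tool call, the
    executor runs the tool, and the inspector tracks intermediate visual results."""
    memory = inspector.summarize_inputs(query)   # metadata of the input images/videos
    history = []                                 # (thought/action, observation) pairs
    for _ in range(max_steps):
        step = planner.next_step(query, memory, history)
        if step.is_final:                        # e.g. "Thought: I know the final answer"
            return step.answer
        observation, artifacts = executor.run(step.tool, step.arguments)
        memory = inspector.update(memory, artifacts)  # e.g. register a segmented clip
        history.append((step, observation))
    return planner.best_effort_answer(query, memory, history)
```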
2306.08640#20
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
Recent research on Large Language Models (LLMs) has led to remarkable advancements in general NLP AI assistants. Some studies have further explored the use of LLMs for planning and invoking models or APIs to address more general multi-modal user queries. Despite this progress, complex visual-based tasks still remain challenging due to the diverse nature of visual tasks. This diversity is reflected in two aspects: 1) Reasoning paths. For many real-life applications, it is hard to accurately decompose a query simply by examining the query itself. Planning based on the specific visual content and the results of each step is usually required. 2) Flexible inputs and intermediate results. Input forms could be flexible for in-the-wild cases, and involves not only a single image or video but a mixture of videos and images, e.g., a user-view image with some reference videos. Besides, a complex reasoning process will also generate diverse multimodal intermediate results, e.g., video narrations, segmented video clips, etc. To address such general cases, we propose a multi-modal AI assistant, AssistGPT, with an interleaved code and language reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate LLMs with various tools. Specifically, the Planner is capable of using natural language to plan which tool in Executor should do next based on the current reasoning progress. Inspector is an efficient memory manager to assist the Planner to feed proper visual information into a specific tool. Finally, since the entire reasoning process is complex and flexible, a Learner is designed to enable the model to autonomously explore and discover the optimal solution. We conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving state-of-the-art results. Moreover, showcases demonstrate the ability of our system to handle questions far more complex than those found in the benchmarks.
http://arxiv.org/pdf/2306.08640
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
cs.CV
Project page: https://showlab.github.io/assistgpt/
null
cs.CV
20230614
20230628
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "1810.04805" }, { "id": "2305.06355" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2201.11903" }, { "id": "2204.00598" }, { "id": "2212.00280" }, { "id": "2112.08614" }, { "id": "2304.04370" }, { "id": "2107.03374" }, { "id": "2303.16434" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2205.10747" }, { "id": "2303.05499" }, { "id": "2211.09699" }, { "id": "2205.14100" }, { "id": "2212.09522" }, { "id": "2303.11381" }, { "id": "2301.12597" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2212.04356" }, { "id": "2303.17580" }, { "id": "1908.07490" }, { "id": "2304.09842" }, { "id": "2208.12037" }, { "id": "2209.10918" }, { "id": "2305.06500" }, { "id": "2211.11559" } ]
2306.08651
20
# 4 The MESSYSURFACES Dataset To assess a robot’s ability to reason socially in grounded environments, we introduce the MESSYSURFACES dataset. The dataset consists of images of 308 objects across 70 real-world surfaces that need to be cleaned. An average of 68% of objects are occluded in scene-level images, so we also provide 5 close-up images as a way for the robot to “actively perceive” the object, see Fig. 2 for an example. MESSYSURFACES also includes a benchmark evaluation of multiple choice questions for each object where each option corresponds to different ways to tidy the object. Through a consensus of 5 human annotators, we determine which one of the choices is the most socially appropriate. To do well, a robot must reason about the socially appropriate way to clean each object from the images alone. Since no human preferences are given, the robot must identify relevant attributes of each object from the images (e.g., is the sports car built out of Legos or MEGA Bloks?) and then reason about how to tidy the object using this information. MESSYSURFACES contains 45 office desks, 4 bathroom counters, 5 bedroom tables, 8 kitchen counters, 4 living room tables and 4 dining tables.
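To show how surface-level records like these might be organized on disk, here is a small loading sketch; the directory layout assumed here (`scene.jpg` plus one folder of five close-ups per object) is our own assumption, not the dataset's documented format.

```python
from pathlib import Path

VIEWS = ("top", "front", "back", "left", "right")

def load_surface(surface_dir: str) -> dict:
    """Collect the scene image and per-object close-up paths for one surface."""
    root = Path(surface_dir)
    objects = {
        obj_dir.name: {view: obj_dir / f"{view}.jpg" for view in VIEWS}
        for obj_dir in sorted(p for p in root.iterdir() if p.is_dir())
    }
    return {"scene": root / "scene.jpg", "objects": objects}

# desk = load_surface("messysurfaces/office_desk_03")  # hypothetical path
```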
2306.08651#20
Toward Grounded Social Reasoning
Consider a robot tasked with tidying a desk with a meticulously constructed Lego sports car. A human may recognize that it is not socially appropriate to disassemble the sports car and put it away as part of the "tidying". How can a robot reach that conclusion? Although large language models (LLMs) have recently been used to enable social reasoning, grounding this reasoning in the real world has been challenging. To reason in the real world, robots must go beyond passively querying LLMs and *actively gather information from the environment* that is required to make the right decision. For instance, after detecting that there is an occluded car, the robot may need to actively perceive the car to know whether it is an advanced model car made out of Legos or a toy car built by a toddler. We propose an approach that leverages an LLM and vision language model (VLM) to help a robot actively perceive its environment to perform grounded social reasoning. To evaluate our framework at scale, we release the MessySurfaces dataset which contains images of 70 real-world surfaces that need to be cleaned. We additionally illustrate our approach with a robot on 2 carefully designed surfaces. We find an average 12.9% improvement on the MessySurfaces benchmark and an average 15% improvement on the robot experiments over baselines that do not use active perception. The dataset, code, and videos of our approach can be found at https://minaek.github.io/groundedsocialreasoning.
http://arxiv.org/pdf/2306.08651
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
cs.RO, cs.AI
null
null
cs.RO
20230614
20230614
[ { "id": "1606.06565" }, { "id": "2008.02275" }, { "id": "2303.00001" }, { "id": "2305.06500" }, { "id": "2201.05320" } ]
2306.08302
21
Fig. 6. The general roadmap of unifying KGs and LLMs. (a.) KG-enhanced LLMs. (b.) LLM-augmented KGs. (c.) Synergized LLMs + KGs.

TABLE 1: Representative applications of using LLMs and KGs.

| Name | Category | LLMs | KGs | URL |
|---|---|---|---|---|
| ChatGPT/GPT-4 | Chat Bot | ✓ | | https://shorturl.at/cmsE0 |
| ERNIE 3.0 | Chat Bot | ✓ | ✓ | https://shorturl.at/sCLV9 |
| Bard | Chat Bot | ✓ | ✓ | https://shorturl.at/pDLY6 |
| Firefly | Photo Editing | ✓ | | https://shorturl.at/fkzJV |
| AutoGPT | AI Assistant | ✓ | | https://shorturl.at/bkoSY |
| Copilot | Coding Assistant | ✓ | | https://shorturl.at/lKLUV |
| New Bing | Web Search | ✓ | | https://shorturl.at/bimps |
| Shop.ai | Recommendation | ✓ | | https://shorturl.at/alCY7 |
| Wikidata | Knowledge Base | | ✓ | https://shorturl.at/lyMY5 |
| KO | Knowledge Base | | ✓ | https://shorturl.at/sx238 |
| OpenBG | Recommendation | | ✓ | https://shorturl.at/pDMV9 |
| Doctor.ai | Health Care Assistant | ✓ | ✓ | https://shorturl.at/dhlK0 |
2306.08302#21
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
2306.08568
21
| Format | Model | plt | np | pd | py | scp | sk | tf | All |
|---|---|---|---|---|---|---|---|---|---|
| # of problems | | 155 | 220 | 291 | 68 | 106 | 115 | 45 | 1,000 |
| Completion | InCoder-6B | 28.3 | 4.4 | 3.1 | 4.4 | 2.8 | 2.8 | 3.8 | 7.4 |
| Completion | CodeGen-mono | 31.7 | 10.9 | 3.4 | 7.0 | 9.0 | 10.8 | 15.2 | 11.7 |
| Completion | Code-Cushman-001 | 40.7 | 21.8 | 7.9 | 12.4 | 11.3 | 18.0 | 12.2 | 18.1 |
| Completion | StarCoder | 51.7 | 29.7 | 11.4 | 21.4 | 20.2 | 29.5 | 24.5 | 26.0 |
| Completion | WizardCoder | 55.2 | 33.6 | 16.7 | 26.2 | 24.2 | 24.9 | 26.7 | 29.2 |
| Insertion | InCoder-6B | 28.3 | 4.6 | 2.9 | 4.4 | 2.8 | 3.1 | 7.8 | 7.5 |
| Insertion | StarCoder | 51.7 | 30.8 | 10.3 | 21.0 | 20.2 | 27.4 | 20.0 | 25.4 |
| Insertion | WizardCoder | 55.2 | 35.1 | 20.4 | 30.4 | 28.9 | 32.3 | 37.8 | 32.8 |
2306.08568#21
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+. Our code, model weights, and data are public at https://github.com/nlpxucan/WizardLM
http://arxiv.org/pdf/2306.08568
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
cs.CL, cs.AI
Large Language model, Code Generation, Code LLMs
null
cs.CL
20230614
20230614
[ { "id": "2305.06161" }, { "id": "2304.12244" }, { "id": "2212.10560" } ]
2306.08640
21
[Figure (AssistGPT system overview, garbled in extraction): the Planner takes the user query (e.g., “When was the video …?”), a tool-set illustration (e.g., a text detection module used for detecting text), and an in-context example (Question / Thought / Action / Observation), and produces interleaved Thought and Action steps; the Executor runs the selected module (e.g., image/keyframe captioner, grounding, crop image / segment video), with validation and error messages, and returns an Observation (the output of a tool); the Inspector maintains summaries of the input and intermediate videos and images (e.g., “visual-1: a 17 seconds video, …”); the loop repeats until the Planner reaches “Thought: I know the final answer” and returns the final answer to the user.]
2306.08640#21
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
Recent research on Large Language Models (LLMs) has led to remarkable advancements in general NLP AI assistants. Some studies have further explored the use of LLMs for planning and invoking models or APIs to address more general multi-modal user queries. Despite this progress, complex visual-based tasks still remain challenging due to the diverse nature of visual tasks. This diversity is reflected in two aspects: 1) Reasoning paths. For many real-life applications, it is hard to accurately decompose a query simply by examining the query itself. Planning based on the specific visual content and the results of each step is usually required. 2) Flexible inputs and intermediate results. Input forms could be flexible for in-the-wild cases, and involves not only a single image or video but a mixture of videos and images, e.g., a user-view image with some reference videos. Besides, a complex reasoning process will also generate diverse multimodal intermediate results, e.g., video narrations, segmented video clips, etc. To address such general cases, we propose a multi-modal AI assistant, AssistGPT, with an interleaved code and language reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate LLMs with various tools. Specifically, the Planner is capable of using natural language to plan which tool in Executor should do next based on the current reasoning progress. Inspector is an efficient memory manager to assist the Planner to feed proper visual information into a specific tool. Finally, since the entire reasoning process is complex and flexible, a Learner is designed to enable the model to autonomously explore and discover the optimal solution. We conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving state-of-the-art results. Moreover, showcases demonstrate the ability of our system to handle questions far more complex than those found in the benchmarks.
http://arxiv.org/pdf/2306.08640
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
cs.CV
Project page: https://showlab.github.io/assistgpt/
null
cs.CV
20230614
20230628
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "1810.04805" }, { "id": "2305.06355" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2201.11903" }, { "id": "2204.00598" }, { "id": "2212.00280" }, { "id": "2112.08614" }, { "id": "2304.04370" }, { "id": "2107.03374" }, { "id": "2303.16434" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2205.10747" }, { "id": "2303.05499" }, { "id": "2211.09699" }, { "id": "2205.14100" }, { "id": "2212.09522" }, { "id": "2303.11381" }, { "id": "2301.12597" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2212.04356" }, { "id": "2303.17580" }, { "id": "1908.07490" }, { "id": "2304.09842" }, { "id": "2208.12037" }, { "id": "2209.10918" }, { "id": "2305.06500" }, { "id": "2211.11559" } ]
2306.08651
21
Data Collection Process. We recruited 51 participants to provide images of cluttered surfaces. Each participant was asked to pick 4 – 6 objects on a surface. They were then asked to take a photo of the scene-level view as well as close-up photos of each object from the top, right, left, front, and back angles – the offline equivalent of having a robot actively navigate a scene. The task took approximately 15−30 minutes. After receiving the photos, we post-processed each image and cropped out any identifiable information.
2306.08651#21
Toward Grounded Social Reasoning
Consider a robot tasked with tidying a desk with a meticulously constructed Lego sports car. A human may recognize that it is not socially appropriate to disassemble the sports car and put it away as part of the "tidying". How can a robot reach that conclusion? Although large language models (LLMs) have recently been used to enable social reasoning, grounding this reasoning in the real world has been challenging. To reason in the real world, robots must go beyond passively querying LLMs and *actively gather information from the environment* that is required to make the right decision. For instance, after detecting that there is an occluded car, the robot may need to actively perceive the car to know whether it is an advanced model car made out of Legos or a toy car built by a toddler. We propose an approach that leverages an LLM and vision language model (VLM) to help a robot actively perceive its environment to perform grounded social reasoning. To evaluate our framework at scale, we release the MessySurfaces dataset which contains images of 70 real-world surfaces that need to be cleaned. We additionally illustrate our approach with a robot on 2 carefully designed surfaces. We find an average 12.9% improvement on the MessySurfaces benchmark and an average 15% improvement on the robot experiments over baselines that do not use active perception. The dataset, code, and videos of our approach can be found at https://minaek.github.io/groundedsocialreasoning.
http://arxiv.org/pdf/2306.08651
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
cs.RO, cs.AI
null
null
cs.RO
20230614
20230614
[ { "id": "1606.06565" }, { "id": "2008.02275" }, { "id": "2303.00001" }, { "id": "2305.06500" }, { "id": "2201.05320" } ]
2306.08302
22
Chatbot. Firefly develops a photo editing application that allows users to edit photos by using natural language descriptions. Copilot, New Bing, and Shop.ai adopt LLMs to empower their applications in the areas of coding assistant, web search, and recommendation, respectively. Wikidata and KO are two representative knowledge graph applications that are used to provide external knowledge. OpenBG [90] is a knowledge graph designed for recommendation. Doctor.ai develops a health care assistant that incorporates LLMs and KGs to provide medical advice. # 2.2.3 Domain-specific Knowledge Graphs Domain-specific knowledge graphs are often constructed to represent knowledge in a specific domain, e.g., medical, biology, and finance [23]. Compared with encyclopedic knowledge graphs, domain-specific knowledge graphs are often smaller in size, but more accurate and reliable. For example, UMLS [77] is a domain-specific knowledge graph in the medical domain, which contains biomedical concepts and their relationships. In addition, there are some domain-specific knowledge graphs in other domains, such as finance [78], geology [79], biology [80], chemistry [81] and genealogy [82]. # 2.2.4 Multi-modal Knowledge Graphs.
2306.08302#22
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
2306.08568
22
Figure 2: Ablation study on the number of data evolution rounds (Pass@1 on HumanEval). 2. WizardCoder outperforms all the open-source Code LLMs by a large margin (+22.3 on HumanEval), including StarCoder, CodeGen, CodeGeeX, and CodeT5+. 3. WizardCoder significantly outperforms all the open-source Code LLMs with instruction fine-tuning, including InstructCodeT5+, StarCoder-GPTeacher, and Instruct-Codegen-16B. # 4.4 Evaluation on DS-1000
2306.08568#22
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+. Our code, model weights, and data are public at https://github.com/nlpxucan/WizardLM
http://arxiv.org/pdf/2306.08568
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
cs.CL, cs.AI
Large Language model, Code Generation, Code LLMs
null
cs.CL
20230614
20230614
[ { "id": "2305.06161" }, { "id": "2304.12244" }, { "id": "2212.10560" } ]
2306.08640
22
[Figure continuation (garbled in extraction): the reasoning loop repeats until the Planner reaches “Thought: I know the final answer”, after which the final answer is returned to the user. The Inspector’s summaries of metadata include, e.g., “visual-0: a 48.27 seconds video, sparse subtitle, user provided video, a toy train on the floor” and “visual-1: a 17 seconds video, segmented video from visual-0, dense subtitle, target clip for the query ‘When was the video …’”. If the run is evaluated as successful, the Learner’s In-Context Memory Bank saves it as an in-context example for better planning and module prompts.]
2306.08640#22
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
Recent research on Large Language Models (LLMs) has led to remarkable advancements in general NLP AI assistants. Some studies have further explored the use of LLMs for planning and invoking models or APIs to address more general multi-modal user queries. Despite this progress, complex visual-based tasks still remain challenging due to the diverse nature of visual tasks. This diversity is reflected in two aspects: 1) Reasoning paths. For many real-life applications, it is hard to accurately decompose a query simply by examining the query itself. Planning based on the specific visual content and the results of each step is usually required. 2) Flexible inputs and intermediate results. Input forms could be flexible for in-the-wild cases, and involves not only a single image or video but a mixture of videos and images, e.g., a user-view image with some reference videos. Besides, a complex reasoning process will also generate diverse multimodal intermediate results, e.g., video narrations, segmented video clips, etc. To address such general cases, we propose a multi-modal AI assistant, AssistGPT, with an interleaved code and language reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate LLMs with various tools. Specifically, the Planner is capable of using natural language to plan which tool in Executor should do next based on the current reasoning progress. Inspector is an efficient memory manager to assist the Planner to feed proper visual information into a specific tool. Finally, since the entire reasoning process is complex and flexible, a Learner is designed to enable the model to autonomously explore and discover the optimal solution. We conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving state-of-the-art results. Moreover, showcases demonstrate the ability of our system to handle questions far more complex than those found in the benchmarks.
http://arxiv.org/pdf/2306.08640
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
cs.CV
Project page: https://showlab.github.io/assistgpt/
null
cs.CV
20230614
20230628
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "1810.04805" }, { "id": "2305.06355" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2201.11903" }, { "id": "2204.00598" }, { "id": "2212.00280" }, { "id": "2112.08614" }, { "id": "2304.04370" }, { "id": "2107.03374" }, { "id": "2303.16434" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2205.10747" }, { "id": "2303.05499" }, { "id": "2211.09699" }, { "id": "2205.14100" }, { "id": "2212.09522" }, { "id": "2303.11381" }, { "id": "2301.12597" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2212.04356" }, { "id": "2303.17580" }, { "id": "1908.07490" }, { "id": "2304.09842" }, { "id": "2208.12037" }, { "id": "2209.10918" }, { "id": "2305.06500" }, { "id": "2211.11559" } ]
2306.08651
22
Benchmark Evaluation. The benchmark questions consist of 5 LLM-generated multiple choice options about how to manipulate each object to clean the surface in a socially appropriate manner. To make the options diverse, we asked the LLM to first identify 5 states the object could be in and then queried it to come up with a cleaning action for each of those states (see Fig. 2 for an example). For each question, we recruited 5 annotators to choose the correct state-action pair based on the scene and close-up images of the object. Annotators were also given an option to indicate if none of the choices were a good fit. We used the majority label as our answer and omitted 16 questions (out of 324) where a majority thought none of the choices were a good fit. For questions that had two equally popular answers, we counted both as correct. Our annotators agreed on average 67% of the time. To evaluate the quality of our multiple choice options, we asked annotators to rate how appropriate each cleaning action is for each object state. Annotators gave each option an average rating of 4.1 out of 5. The average rating for the correct option was 4.4 out of 5. Annotators. In total, we recruited 350 annotators from Prolific. Each annotator
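A minimal sketch of the label-aggregation rule described above (majority vote per question, dropping questions where "none of the choices" wins outright, and keeping both options when two answers tie); the function name and data layout are illustrative, not the authors' actual tooling:

```python
from collections import Counter

def aggregate_benchmark_labels(annotations):
    """annotations: question id -> list of options chosen by its annotators
    ("none" marks the none-of-the-above choice). Returns question id -> list
    of gold options (two entries when two answers are equally popular)."""
    gold = {}
    for qid, votes in annotations.items():
        counts = Counter(votes).most_common()
        best = counts[0][1]
        winners = [opt for opt, c in counts if c == best]
        if winners == ["none"]:
            continue  # majority thought none of the choices fit: omit question
        gold[qid] = [w for w in winners if w != "none"]
    return gold

# Toy example: q1 has a clear majority, q2 is a two-way tie (both count as correct).
print(aggregate_benchmark_labels({
    "q1": ["A", "A", "B", "A", "none"],
    "q2": ["C", "C", "D", "D", "none"],
}))  # {'q1': ['A'], 'q2': ['C', 'D']}
```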
2306.08651#22
Toward Grounded Social Reasoning
Consider a robot tasked with tidying a desk with a meticulously constructed Lego sports car. A human may recognize that it is not socially appropriate to disassemble the sports car and put it away as part of the "tidying". How can a robot reach that conclusion? Although large language models (LLMs) have recently been used to enable social reasoning, grounding this reasoning in the real world has been challenging. To reason in the real world, robots must go beyond passively querying LLMs and *actively gather information from the environment* that is required to make the right decision. For instance, after detecting that there is an occluded car, the robot may need to actively perceive the car to know whether it is an advanced model car made out of Legos or a toy car built by a toddler. We propose an approach that leverages an LLM and vision language model (VLM) to help a robot actively perceive its environment to perform grounded social reasoning. To evaluate our framework at scale, we release the MessySurfaces dataset which contains images of 70 real-world surfaces that need to be cleaned. We additionally illustrate our approach with a robot on 2 carefully designed surfaces. We find an average 12.9% improvement on the MessySurfaces benchmark and an average 15% improvement on the robot experiments over baselines that do not use active perception. The dataset, code, and videos of our approach can be found at https://minaek.github.io/groundedsocialreasoning.
http://arxiv.org/pdf/2306.08651
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
cs.RO, cs.AI
null
null
cs.RO
20230614
20230614
[ { "id": "1606.06565" }, { "id": "2008.02275" }, { "id": "2303.00001" }, { "id": "2305.06500" }, { "id": "2201.05320" } ]
2306.08302
23
# 2.2.4 Multi-modal Knowledge Graphs. 3 ROADMAP & CATEGORIZATION In this section, we first present a roadmap of explicit frameworks that unify LLMs and KGs. Then, we present the categorization of research on unifying LLMs and KGs. # 3.1 Roadmap The roadmap of unifying KGs and LLMs is illustrated in Fig. 6. In the roadmap, we identify three frameworks for the unification of LLMs and KGs, including KG-enhanced LLMs, LLM-augmented KGs, and Synergized LLMs + KGs. The KG-enhanced LLMs and LLM-augmented KGs are two parallel frameworks that aim to enhance the capabilities of LLMs and KGs, respectively. Building upon these frameworks, Synergized LLMs + KGs is a unified framework that aims to synergize LLMs and KGs to mutually enhance each other.
2306.08302#23
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
2306.08568
23
# 4.4 Evaluation on DS-1000 The DS-1000 benchmark [34] comprises 1,000 distinct data science workflows spanning seven libraries. It assesses the performance of code generations against test cases and supports two evaluation modes: completion and insertion. In our experiments, we only report insertion scores for the models that support it. The DS-1000 benchmark further classifies problems based on the libraries employed, including Matplotlib (plt), NumPy (np), Pandas (pd), SciPy (scp), Scikit-Learn (sk), PyTorch (py), and TensorFlow (tf). We follow the same prompt format as StarCoder. In Table 2, we present pass@1 (n=40) results for each library, along with an overall score. Based on these results, we conclude that WizardCoder is significantly superior to all other models when tackling data science problems on the DS-1000 benchmark. This observation holds true across nearly all data science libraries. # 4.5 Ablation Study
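The pass@1 (n=40) numbers reported for DS-1000 above follow the standard unbiased pass@k estimator commonly used with such benchmarks; the sketch below shows that estimator, assuming the per-problem count of passing generations is already available from the test harness:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of the chance that at least one of k samples passes,
    given n generated samples of which c pass the problem's test cases."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

def benchmark_pass_at_k(correct_counts, n: int = 40, k: int = 1) -> float:
    """Average pass@k over all problems of a benchmark."""
    return sum(pass_at_k(n, c, k) for c in correct_counts) / len(correct_counts)

# Toy example with three problems: 10, 0, and 40 passing samples out of 40 each.
print(benchmark_pass_at_k([10, 0, 40], n=40, k=1))  # (0.25 + 0.0 + 1.0) / 3
```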
2306.08568#23
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+. Our code, model weights, and data are public at https://github.com/nlpxucan/WizardLM
http://arxiv.org/pdf/2306.08568
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
cs.CL, cs.AI
Large Language model, Code Generation, Code LLMs
null
cs.CL
20230614
20230614
[ { "id": "2305.06161" }, { "id": "2304.12244" }, { "id": "2212.10560" } ]
2306.08640
23
Figure 3: Diagrammatic illustration of the AssistGPT system. It consists of four core modules: (1) Planner: controls the whole reasoning process; (2) Executor: executes external tools and returns feedback to the Planner; (3) Inspector: manages the input and intermediate outcomes; (4) Learner: assesses the system performance and records successful trials as in-context examples. # 3.1 Planner The Planner employs a highly capable LLM, i.e., GPT-4 [52], as the central brain to control the global reasoning and planning. It begins the planning process by taking inputs from three types of information: an Instruction Prompt consisting of the [Tool Set Illustration] and [In-Context Example], the Input Query, and the Summary of Visual Inputs created by the Inspector.
2306.08640#23
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
Recent research on Large Language Models (LLMs) has led to remarkable advancements in general NLP AI assistants. Some studies have further explored the use of LLMs for planning and invoking models or APIs to address more general multi-modal user queries. Despite this progress, complex visual-based tasks still remain challenging due to the diverse nature of visual tasks. This diversity is reflected in two aspects: 1) Reasoning paths. For many real-life applications, it is hard to accurately decompose a query simply by examining the query itself. Planning based on the specific visual content and the results of each step is usually required. 2) Flexible inputs and intermediate results. Input forms could be flexible for in-the-wild cases, and involves not only a single image or video but a mixture of videos and images, e.g., a user-view image with some reference videos. Besides, a complex reasoning process will also generate diverse multimodal intermediate results, e.g., video narrations, segmented video clips, etc. To address such general cases, we propose a multi-modal AI assistant, AssistGPT, with an interleaved code and language reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate LLMs with various tools. Specifically, the Planner is capable of using natural language to plan which tool in Executor should do next based on the current reasoning progress. Inspector is an efficient memory manager to assist the Planner to feed proper visual information into a specific tool. Finally, since the entire reasoning process is complex and flexible, a Learner is designed to enable the model to autonomously explore and discover the optimal solution. We conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving state-of-the-art results. Moreover, showcases demonstrate the ability of our system to handle questions far more complex than those found in the benchmarks.
http://arxiv.org/pdf/2306.08640
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
cs.CV
Project page: https://showlab.github.io/assistgpt/
null
cs.CV
20230614
20230628
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "1810.04805" }, { "id": "2305.06355" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2201.11903" }, { "id": "2204.00598" }, { "id": "2212.00280" }, { "id": "2112.08614" }, { "id": "2304.04370" }, { "id": "2107.03374" }, { "id": "2303.16434" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2205.10747" }, { "id": "2303.05499" }, { "id": "2211.09699" }, { "id": "2205.14100" }, { "id": "2212.09522" }, { "id": "2303.11381" }, { "id": "2301.12597" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2212.04356" }, { "id": "2303.17580" }, { "id": "1908.07490" }, { "id": "2304.09842" }, { "id": "2208.12037" }, { "id": "2209.10918" }, { "id": "2305.06500" }, { "id": "2211.11559" } ]
2306.08302
24
Unlike conventional knowledge graphs that only contain textual information, multi-modal knowledge graphs represent facts in multiple modalities such as images, sounds, and videos [83]. For example, IMGpedia [84], MMKG [85], and Richpedia [86] incorporate both text and image information into the knowledge graphs. These knowledge graphs can be used for various multi-modal tasks such as image-text matching [87], visual question answering [88], and recommendation [89]. # 2.3 Applications LLMs and KGs have been widely applied in various real-world applications. We summarize some representative applications of using LLMs and KGs in Table 1. ChatGPT/GPT-4 are LLM-based chatbots that can communicate with humans in a natural dialogue format. To improve knowledge awareness of LLMs, ERNIE 3.0 and Bard incorporate KGs into their chatbot applications. Instead of # 3.1.1 KG-enhanced LLMs
2306.08302#24
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
2306.08568
24
# 4.5 Ablation Study Figure 2 presents an ablation study investigating the impact of the number of data evolution rounds. The first round of evolved data contains 38k samples, the second round 58k, the third round 78k, and the fourth round 98k. For consistency, all models undergo fine-tuning with 200 steps. The results reveal that the highest pass@1 score on HumanEval is achieved after three rounds of data evolution. Based on this observation, we select the data evolved during the third round as the final dataset. # 4.6 Examples Table 3 showcases examples of interactions with our WizardCoder. The examples demonstrate that our model consistently generates accurate responses accompanied by clear explanations. # 5 Conclusion and Future Work This paper introduces WizardCoder, a Code Evol-Instruct fine-tuned Code LLM. The experimental results demonstrate that WizardCoder achieves SOTA performance surpassing all existing open-source Code LLMs on four widely recognized code generation benchmarks: HumanEval, HumanEval+, MBPP, and DS-1000. Furthermore, WizardCoder exhibits superior performance compared to the largest closed LLMs, including Anthropic’s Claude and Google’s Bard.
2306.08568#24
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+. Our code, model weights, and data are public at https://github.com/nlpxucan/WizardLM
http://arxiv.org/pdf/2306.08568
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
cs.CL, cs.AI
Large Language model, Code Generation, Code LLMs
null
cs.CL
20230614
20230614
[ { "id": "2305.06161" }, { "id": "2304.12244" }, { "id": "2212.10560" } ]
2306.08640
24
Then it generates the appropriate output for the next step, which consists of two parts: Thought: a language phrase that indicates what should be done next. While it doesn’t affect the module or API call directly, it aids the LLM planning procedure. Action: a structured string that obeys the pre-defined template provided in the instructions. It specifies which external tool to call and what arguments to input, e.g., […]. After each Executor call to an external tool, the tool returns outputs in the form of natural language, which we refer to as the Observation. If the tool generates an intermediate outcome, e.g., a segmented video, our Inspector will store it and generate a Summary for it. Both the Observation and Summary will be fed to the Planner to guide the planning of the next step. The following sections will introduce more details of the Action, Observation, and Summary. Currently, we integrate 13 functional tools in AssistGPT to power multi-modal assistance, as shown in Tab. 1. These modules can be mainly categorized into three types:
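A minimal sketch of such a Thought/Action/Observation loop; the tool names, the Action template, and the regular expression are illustrative placeholders rather than AssistGPT's actual interface:

```python
import re

# Hypothetical tool registry standing in for the Executor's modules.
TOOLS = {
    "image_caption": lambda arg: f"Observation: a caption of {arg}",
    "subtitle_grounding": lambda arg: f"Observation: matched clip in {arg}",
}

ACTION_RE = re.compile(r"Action:\s*(\w+)\((.*?)\)")

def run_planner(llm, instruction_prompt, query, summaries, max_steps=8):
    """Iterate Thought -> Action -> Observation until a final answer appears.
    `llm` is any callable that maps the running transcript to the next
    Thought/Action text."""
    transcript = f"{instruction_prompt}\nQuery: {query}\nSummary: {summaries}\n"
    for _ in range(max_steps):
        step = llm(transcript)  # e.g. "Thought: ...\nAction: image_caption(visual-0)"
        transcript += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        match = ACTION_RE.search(step)
        if match is None:
            continue
        tool, arg = match.group(1), match.group(2)
        observation = TOOLS[tool](arg)  # Executor call; the Inspector would also summarize new assets
        transcript += observation + "\n"
    return None
```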
2306.08640#24
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
Recent research on Large Language Models (LLMs) has led to remarkable advancements in general NLP AI assistants. Some studies have further explored the use of LLMs for planning and invoking models or APIs to address more general multi-modal user queries. Despite this progress, complex visual-based tasks still remain challenging due to the diverse nature of visual tasks. This diversity is reflected in two aspects: 1) Reasoning paths. For many real-life applications, it is hard to accurately decompose a query simply by examining the query itself. Planning based on the specific visual content and the results of each step is usually required. 2) Flexible inputs and intermediate results. Input forms could be flexible for in-the-wild cases, and involves not only a single image or video but a mixture of videos and images, e.g., a user-view image with some reference videos. Besides, a complex reasoning process will also generate diverse multimodal intermediate results, e.g., video narrations, segmented video clips, etc. To address such general cases, we propose a multi-modal AI assistant, AssistGPT, with an interleaved code and language reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate LLMs with various tools. Specifically, the Planner is capable of using natural language to plan which tool in Executor should do next based on the current reasoning progress. Inspector is an efficient memory manager to assist the Planner to feed proper visual information into a specific tool. Finally, since the entire reasoning process is complex and flexible, a Learner is designed to enable the model to autonomously explore and discover the optimal solution. We conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving state-of-the-art results. Moreover, showcases demonstrate the ability of our system to handle questions far more complex than those found in the benchmarks.
http://arxiv.org/pdf/2306.08640
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
cs.CV
Project page: https://showlab.github.io/assistgpt/
null
cs.CV
20230614
20230628
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "1810.04805" }, { "id": "2305.06355" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2201.11903" }, { "id": "2204.00598" }, { "id": "2212.00280" }, { "id": "2112.08614" }, { "id": "2304.04370" }, { "id": "2107.03374" }, { "id": "2303.16434" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2205.10747" }, { "id": "2303.05499" }, { "id": "2211.09699" }, { "id": "2205.14100" }, { "id": "2212.09522" }, { "id": "2303.11381" }, { "id": "2301.12597" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2212.04356" }, { "id": "2303.17580" }, { "id": "1908.07490" }, { "id": "2304.09842" }, { "id": "2208.12037" }, { "id": "2209.10918" }, { "id": "2305.06500" }, { "id": "2211.11559" } ]
2306.08651
24
[Figure residue: two panels (Oracle VLM, Zero-Shot VLM) plotting accuracy against iterations 1–5 for Oracle, Ours-LLM, Ours-Front, Baseline Questions, No Active Perception, and No Questions.] Figure 3: MESSYSURFACES Benchmark Accuracy. For both the Oracle VLM and InstructBLIP, on average, our approach outperforms all baselines on the MESSYSURFACES benchmark. Accuracy is given by the percentage by which our framework selects the most appropriate (as indicated by our annotators) way to tidy each object. # 5 Experiments We examine how well our approach can perform grounded social reasoning on the MESSYSURFACES dataset as well as a real-world robotic system. Primary Metric. We use accuracy on the benchmark questions as our primary metric. Each benchmark question presents 5 options on how to tidy the object, with accuracy defined as the percentage by which our framework selects the most appropriate option (as indicated by our annotators). Baselines. Key to our approach (Ours-LLM) is the ability to supplement missing information by asking questions and actively perceiving the environment. To evaluate this, we compare the following:
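Separately from the baseline list that follows in the paper, a minimal sketch of the accuracy metric defined under Primary Metric above, assuming per-question gold labels have already been aggregated from annotator votes (ties keep both options as correct); names are illustrative:

```python
def benchmark_accuracy(predictions, gold):
    """predictions: question id -> option chosen by the framework.
    gold: question id -> list of annotator-preferred options (ties allowed).
    Returns the fraction of questions answered with an appropriate option."""
    correct = sum(1 for qid, options in gold.items() if predictions.get(qid) in options)
    return correct / len(gold)

# Toy example: the tie question q2 accepts either of its two popular answers.
gold = {"q1": ["A"], "q2": ["C", "D"]}
print(benchmark_accuracy({"q1": "A", "q2": "D"}, gold))  # 1.0
```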
2306.08651#24
Toward Grounded Social Reasoning
Consider a robot tasked with tidying a desk with a meticulously constructed Lego sports car. A human may recognize that it is not socially appropriate to disassemble the sports car and put it away as part of the "tidying". How can a robot reach that conclusion? Although large language models (LLMs) have recently been used to enable social reasoning, grounding this reasoning in the real world has been challenging. To reason in the real world, robots must go beyond passively querying LLMs and *actively gather information from the environment* that is required to make the right decision. For instance, after detecting that there is an occluded car, the robot may need to actively perceive the car to know whether it is an advanced model car made out of Legos or a toy car built by a toddler. We propose an approach that leverages an LLM and vision language model (VLM) to help a robot actively perceive its environment to perform grounded social reasoning. To evaluate our framework at scale, we release the MessySurfaces dataset which contains images of 70 real-world surfaces that need to be cleaned. We additionally illustrate our approach with a robot on 2 carefully designed surfaces. We find an average 12.9% improvement on the MessySurfaces benchmark and an average 15% improvement on the robot experiments over baselines that do not use active perception. The dataset, code, and videos of our approach can be found at https://minaek.github.io/groundedsocialreasoning.
http://arxiv.org/pdf/2306.08651
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
cs.RO, cs.AI
null
null
cs.RO
20230614
20230614
[ { "id": "1606.06565" }, { "id": "2008.02275" }, { "id": "2303.00001" }, { "id": "2305.06500" }, { "id": "2201.05320" } ]
2306.08302
25
# 3.1.1 KG-enhanced LLMs LLMs are renowned for their ability to learn knowledge from large-scale corpora and achieve state-of-the-art performance in various NLP tasks. However, LLMs are often criticized for their hallucination issues [15] and lack of interpretability. To address these issues, researchers have proposed to enhance LLMs with knowledge graphs (KGs). KGs store enormous knowledge in an explicit and structured way, which can be used to enhance the knowledge awareness of LLMs. Some researchers have proposed to incorporate KGs into LLMs during the pre-training stage, which can help LLMs learn knowledge from KGs [35], [91]. Other researchers have proposed to incorporate KGs into LLMs during the inference stage. Retrieving knowledge from KGs can significantly improve the performance of LLMs in accessing domain-specific knowledge [92]. To improve the interpretability of LLMs, researchers also utilize
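As an illustration of inference-stage KG enhancement, the sketch below retrieves triples whose entities appear in the question and prepends them to the prompt; the toy KG, the substring matching, and the `llm` callable are placeholders, not a specific method from the survey:

```python
# Toy knowledge graph of (head, relation, tail) triples.
KG = [
    ("Insulin", "treats", "Diabetes"),
    ("Metformin", "treats", "Type 2 Diabetes"),
    ("Insulin", "produced_by", "Pancreas"),
]

def retrieve_triples(question, kg, limit=5):
    """Return triples whose head or tail entity is mentioned in the question."""
    q = question.lower()
    return [t for t in kg if t[0].lower() in q or t[2].lower() in q][:limit]

def kg_enhanced_answer(llm, question):
    """Prepend retrieved facts to the prompt so the LLM can ground its answer."""
    facts = "\n".join(f"({h}, {r}, {t})" for h, r, t in retrieve_triples(question, KG))
    prompt = f"Known facts:\n{facts}\n\nQuestion: {question}\nAnswer:"
    return llm(prompt)
```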
2306.08302#25
Unifying Large Language Models and Knowledge Graphs: A Roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.
http://arxiv.org/pdf/2306.08302
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
cs.CL, cs.AI
A short version of this paper was accepted by IEEE Transactions on Knowledge and Data Engineering (TKDE)
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
cs.CL
20230614
20240125
[ { "id": "2309.01538" }, { "id": "2302.13971" }, { "id": "2110.08173" }, { "id": "2203.16747" }, { "id": "2201.05337" }, { "id": "2302.12095" }, { "id": "1810.04805" }, { "id": "2305.13168" }, { "id": "2305.12392" }, { "id": "2206.14268" }, { "id": "2111.08546" }, { "id": "2212.10511" }, { "id": "2107.02137" }, { "id": "2105.10311" }, { "id": "2308.09729" }, { "id": "2310.02129" }, { "id": "1910.12840" }, { "id": "2206.13163" }, { "id": "2303.11146" }, { "id": "2009.02835" }, { "id": "2205.07424" }, { "id": "1909.03193" }, { "id": "2010.15980" }, { "id": "2007.00655" }, { "id": "2203.11171" }, { "id": "2306.06427" }, { "id": "2305.08281" }, { "id": "2104.08696" }, { "id": "2110.08455" }, { "id": "2305.09645" }, { "id": "2310.01061" }, { "id": "2308.03688" }, { "id": "2305.01157" }, { "id": "2310.08975" }, { "id": "2301.08913" }, { "id": "2305.13172" }, { "id": "2212.13428" }, { "id": "2303.10368" }, { "id": "2307.07697" }, { "id": "2308.11730" }, { "id": "2108.01928" }, { "id": "2010.00711" }, { "id": "2304.10592" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2307.12976" }, { "id": "2309.03118" }, { "id": "2304.13712" }, { "id": "2212.01588" }, { "id": "2309.01219" }, { "id": "2302.04023" }, { "id": "2202.08772" }, { "id": "2208.02743" }, { "id": "2201.08239" }, { "id": "2201.08531" }, { "id": "2302.05019" }, { "id": "2003.10555" }, { "id": "1907.11692" }, { "id": "2201.04843" }, { "id": "2206.12617" }, { "id": "2201.05575" }, { "id": "2310.07984" } ]
2306.08568
25
Future Work. Although our WizardCoder demonstrates impressive coding performance, as depicted in Figure 1, our model still falls significantly behind the SOTA LLM, GPT-4. Therefore, future work will prioritize the enhancement of the Code Evol-Instruct method to further augment the performance of our model. Broader Impact. Similar to other LLMs, our WizardCoder could also generate unethical, harmful, or misleading information. Therefore, future research to address the ethical and societal implications is needed. # Table 3: Examples of interaction with our WizardCoder. # Instruction # Response
2306.08568#25
WizardCoder: Empowering Code Large Language Models with Evol-Instruct
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+. Our code, model weights, and data are public at https://github.com/nlpxucan/WizardLM
http://arxiv.org/pdf/2306.08568
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
cs.CL, cs.AI
Large Language model, Code Generation, Code LLMs
null
cs.CL
20230614
20230614
[ { "id": "2305.06161" }, { "id": "2304.12244" }, { "id": "2212.10560" } ]
2306.08640
25
Currently, we integrate 13 functional tools in AssistGPT to power multi-modal assistance, as shown in Tab. 1. These modules can be mainly categorized into three types: • Descriptor: To effectively comprehend and utilize the data derived from intricate multi-modal environments, e.g., image, video, audio, and text, we employ a variety of fundamental models as basic descriptors for perception. These models, including (a) Image Caption, (b) Video Narration, (c) Object Detection, (d) Text Detection, and (e) ASR Translation, enable us to extract enough information from diverse sources, thus enhancing our understanding of the multi-modal scenario. [Footnote 2] The successful trials recorded by the Learner will be introduced later. Table 1: Modules used in AssistGPT. A module may have different models, separated by a slash (/).
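A minimal sketch of how such Descriptor tools could be registered and rendered into the [Tool Set Illustration] that prompts the Planner; the module names, usage strings, and stub functions are illustrative, not AssistGPT's actual API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Module:
    name: str
    usage: str
    run: Callable[[str], str]  # stub: takes an asset id, returns a text description

DESCRIPTORS = [
    Module("image_caption", "describe an image", lambda x: f"caption of {x}"),
    Module("video_narration", "narrate a video clip", lambda x: f"narration of {x}"),
    Module("object_detect", "locate objects in an image", lambda x: f"boxes in {x}"),
    Module("text_detect", "read on-screen text (OCR)", lambda x: f"text in {x}"),
    Module("asr_translation", "transcribe speech to subtitles", lambda x: f"subtitles of {x}"),
]

def tool_set_illustration(modules):
    """Render the tool list shown to the Planner in its instruction prompt."""
    return "\n".join(f"- {m.name}: {m.usage}" for m in modules)

print(tool_set_illustration(DESCRIPTORS))
```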
2306.08640#25
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
Recent research on Large Language Models (LLMs) has led to remarkable advancements in general NLP AI assistants. Some studies have further explored the use of LLMs for planning and invoking models or APIs to address more general multi-modal user queries. Despite this progress, complex visual-based tasks still remain challenging due to the diverse nature of visual tasks. This diversity is reflected in two aspects: 1) Reasoning paths. For many real-life applications, it is hard to accurately decompose a query simply by examining the query itself. Planning based on the specific visual content and the results of each step is usually required. 2) Flexible inputs and intermediate results. Input forms could be flexible for in-the-wild cases, and involves not only a single image or video but a mixture of videos and images, e.g., a user-view image with some reference videos. Besides, a complex reasoning process will also generate diverse multimodal intermediate results, e.g., video narrations, segmented video clips, etc. To address such general cases, we propose a multi-modal AI assistant, AssistGPT, with an interleaved code and language reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate LLMs with various tools. Specifically, the Planner is capable of using natural language to plan which tool in Executor should do next based on the current reasoning progress. Inspector is an efficient memory manager to assist the Planner to feed proper visual information into a specific tool. Finally, since the entire reasoning process is complex and flexible, a Learner is designed to enable the model to autonomously explore and discover the optimal solution. We conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving state-of-the-art results. Moreover, showcases demonstrate the ability of our system to handle questions far more complex than those found in the benchmarks.
http://arxiv.org/pdf/2306.08640
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
cs.CV
Project page: https://showlab.github.io/assistgpt/
null
cs.CV
20230614
20230628
[ { "id": "2302.13971" }, { "id": "2304.02643" }, { "id": "1810.04805" }, { "id": "2305.06355" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2201.11903" }, { "id": "2204.00598" }, { "id": "2212.00280" }, { "id": "2112.08614" }, { "id": "2304.04370" }, { "id": "2107.03374" }, { "id": "2303.16434" }, { "id": "2303.08128" }, { "id": "2303.03378" }, { "id": "2205.10747" }, { "id": "2303.05499" }, { "id": "2211.09699" }, { "id": "2205.14100" }, { "id": "2212.09522" }, { "id": "2303.11381" }, { "id": "2301.12597" }, { "id": "2210.03629" }, { "id": "2303.04671" }, { "id": "2212.04356" }, { "id": "2303.17580" }, { "id": "1908.07490" }, { "id": "2304.09842" }, { "id": "2208.12037" }, { "id": "2209.10918" }, { "id": "2305.06500" }, { "id": "2211.11559" } ]