doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2306.08640
| 52 |
• Object Detection: The main function of this module is to determine whether the image contains the objects mentioned in the query and to address counting-related questions. Thus, it contains an open-set object detection model, e.g., Grounding DINO [64], which can output the bounding boxes of relevant objects based on the query. We also let the module calculate the number of related objects.
• Text Detection: This module is used to extract text from images via OCR, and the extracted text is returned to the Planner. We use Google OCR for this purpose.
• ASR Translation: This module is used to convert audio from a video into text. We use OpenAI's open-source ASR (Automatic Speech Recognition) model, Whisper [65], to accomplish this (a minimal usage sketch follows this list). The ASR output organizes timestamps and text in a manner similar to subtitles. In the implementation, we automatically run this module as soon as we receive a video with audio.
• Region Ground: The purpose of this module is to find a specific area of an image based on the query. We use OFA-Large [66], which is fine-tuned on RefCOCO, to achieve it.
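As a concrete illustration of the ASR Translation step, here is a minimal sketch using OpenAI's open-source Whisper model; the file name, checkpoint size, and subtitle-style formatting are illustrative assumptions rather than AssistGPT's actual code.

```python
# Minimal sketch of an ASR step with OpenAI's open-source Whisper model.
# The input path, checkpoint size, and output formatting are illustrative
# assumptions, not the AssistGPT implementation.
import whisper  # pip install openai-whisper

model = whisper.load_model("base")          # smaller checkpoints trade accuracy for speed
result = model.transcribe("input_video.mp4")  # Whisper extracts and transcribes the audio track

# Organize timestamps and text in a subtitle-like manner, as described above.
for seg in result["segments"]:
    print(f"[{seg['start']:.1f}s - {seg['end']:.1f}s] {seg['text'].strip()}")
```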
|
2306.08640#52
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08651
| 52 |
[53] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99–106, 2021.
[54] J. Kerr, C. M. Kim, K. Goldberg, A. Kanazawa, and M. Tancik. LERF: Language Embedded Radiance Fields, Mar. 2023.
[55] M. A. Research. PyTorch Polymetis: a real-time controller manager. https://github.com/facebookresearch/fairo/tree/main/polymetis, 2021–2023.
[56] J. Carpentier, G. Saurel, G. Buondonno, J. Mirabel, F. Lamiraux, O. Stasse, and N. Mansard. The Pinocchio C++ library: a fast and flexible implementation of rigid body dynamics algorithms and their analytical derivatives. In IEEE International Symposium on System Integrations (SII), 2019.
|
2306.08651#52
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 53 |
Instead of probing the general knowledge by using the encyclopedic and commonsense knowledge graphs, BioLAMA [129] and MedLAMA [120] probe the medical knowledge in LLMs by using medical knowledge graphs. Alex et al. [130] investigate the capacity of LLMs to retain less popular factual knowledge. They select unpopular facts from Wikidata knowledge graphs which have low-frequency clicked entities. These facts are then used for the evaluation, where the results indicate that LLMs encounter difficulties with such knowledge, and that scaling fails to appreciably improve memorization of factual knowledge in the tail.
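For context, knowledge probing of this kind (LAMA, BioLAMA, MedLAMA) typically renders a KG triple as a cloze sentence and checks whether a masked language model recovers the gold object. Below is a minimal sketch under assumed model and template choices; the checkpoint and template are illustrative, not the ones used by these benchmarks.

```python
# Minimal sketch of cloze-style knowledge probing (in the spirit of
# LAMA/BioLAMA/MedLAMA): a KG triple is rendered as a masked sentence and the
# masked LM's top predictions are compared against the gold object.
# The checkpoint and template are illustrative assumptions.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

def probe(subject: str, template: str, gold_object: str, k: int = 5) -> bool:
    # e.g., template = "{} is used to treat [MASK]."
    preds = fill(template.format(subject), top_k=k)
    return any(p["token_str"].strip().lower() == gold_object.lower() for p in preds)

print(probe("Aspirin", "{} is used to treat [MASK].", "pain"))
```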
# 4.4.2 KGs for LLM Analysis
|
2306.08302#53
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 53 |
• Narration Ground: This model's function is to find time segments related to the query based on the video's narration. We propose two implementations: 1) We use GPT-4 [5], taking the video's narration and query as prompts, to output the timestamps of the time segments. 2) Another solution is to use CLIP [67]: we split the video into several segments and calculate the similarity between a frame in each segment and the query; the timestamps of the segment with the highest similarity are output (a minimal sketch of this variant follows this paragraph). In our preliminary experiments, the first solution showed better interpretability and generalization ability, so it was adopted in the benchmark evaluation.
• Text Ground: The purpose of this model is to locate specific areas of an image that correspond to a certain text. This capability can guide users in identifying crucial information in complex, text-rich images, such as user interfaces. The query format is text[:object_name], wherein text signifies the text to be located, and object_name (which is optional) is used to locate the text on a specific object, for instance, "menu: button".
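A minimal sketch of the CLIP-based variant described above, assuming one representative frame per segment and the openly available openai/clip-vit-base-patch32 checkpoint; frame extraction and segment splitting are left out.

```python
# Sketch of the CLIP-based Narration Ground variant: score one frame per
# segment against the query and return the best-matching segment's timestamps.
# Frame extraction, segment count, and checkpoint are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def ground_query(frames: list, segment_times: list, query: str):
    """frames[i] is a representative PIL.Image of the i-th segment;
    segment_times[i] is its (start_sec, end_sec) pair."""
    inputs = processor(text=[query], images=frames, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    sims = out.logits_per_image.squeeze(1)   # one image-text similarity per segment
    best = int(sims.argmax())
    return segment_times[best]               # timestamps of the best-matching segment
```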
|
2306.08640#53
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08651
| 53 |
[57] J. Carpentier, F. Valenza, N. Mansard, et al. Pinocchio: fast forward and inverse dynamics for poly-articulated systems. https://stack-of-tasks.github.io/pinocchio, 2015–2023.
# A Data Collection
Our data collection consists of three components:
1. Collecting the MESSYSURFACES dataset photos.
2. Asking crowdworkers to choose the most socially appropriate action in our benchmark questions.
3. Asking crowdworkers to evaluate parts of our framework.
# A.1 Survey Interface
We show the survey interface we used to complete the 2nd and 3rd crowdsourcing components below:
Figure 7: Parts 1 and 2 of Survey Interface.
Figure 8: Parts 3 and 4 of Survey Interface.
|
2306.08651#53
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 54 |
# 4.4.2 KGs for LLM Analysis
Knowledge graphs (KGs) for pre-trained language model (LLM) analysis aim to answer questions such as "how do LLMs generate the results?" and "how do the function and structure work in LLMs?". To analyze the inference process of LLMs, as shown in Fig. 13, KagNet [38] and QA-GNN [131] ground the results generated by LLMs at each reasoning step in knowledge graphs. In this way, the reasoning process of LLMs can be explained by extracting the graph structure from KGs. Shaobo et al. [123] investigate how LLMs generate their results correctly. They adopt a causal-inspired analysis of facts extracted from KGs. This analysis quantitatively measures the word patterns that LLMs depend on to generate the results. The results show that LLMs generate missing facts more from positionally close words than from knowledge-dependent words; thus, they claim that LLMs are inadequate at memorizing factual knowledge because of this inaccurate dependence. To interpret the training of LLMs, Swamy
# TABLE 3 Summary of representative LLM-augmented KG methods.
|
2306.08302#54
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 54 |
text to be located, and object_name (which is optional) is used to locate the text on a specific object, for instance, "menu: button". Specifically, the model operates in two stages: 1) Based on the Optical Character Recognition (OCR) detection results, the model identifies areas of the image that match the text segment of the query. This is achieved by calculating the edit distance between the query and the extracted OCR text; when the edit distance is below a particular threshold, it is considered a match (a minimal sketch of this matching step follows this paragraph). 2) If more than one textual area is identified, we further refine the results based on the object's name. We employ Semantic Segment Anything (SSA) [63] to segment the image semantically, identifying regions that match the object's name mentioned in the query.
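A minimal sketch of the first, edit-distance matching stage, assuming an OCR result format of text plus bounding box and an illustrative threshold; the SSA-based second stage is omitted.

```python
# Sketch of Text Ground's first stage: match the query text against OCR
# results by string similarity. The OCR record format and the threshold are
# illustrative assumptions, not the paper's exact values.
import difflib

def match_ratio(a: str, b: str) -> float:
    # difflib's ratio is a cheap stand-in for a normalized edit distance
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def ground_text(query_text: str, ocr_results: list, threshold: float = 0.8):
    """ocr_results: [{"text": str, "box": (x1, y1, x2, y2)}, ...] from an OCR engine."""
    matches = [r for r in ocr_results if match_ratio(query_text, r["text"]) >= threshold]
    return [r["box"] for r in matches]   # stage 2 would filter these boxes by object name
```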
|
2306.08640#54
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08651
| 54 |
Figure 8: Parts 3 and 4 of Survey Interface.
The survey consists of a set of questions that we ask about each object, with a single page per object. An example page for the 'mug' object is shown in Fig. 7 and Fig. 8. The first part of the survey asks users to rate the follow-up questions generated by the LLM; results are reported in Section 5 (Experiments) in the main body of the paper, under "Does the LLM Ask Good Follow-Up Questions?" The second part of the survey asks users to evaluate the informativeness of each close-up angle; results are also reported in Section 5, under "Does the LLM Suggest Informative Close-Up Angles?" The third part of the survey asks users to give ground-truth answers to the follow-up questions based on all six images collected of the object; these answers are used as the Oracle VLM when evaluating our framework. The final part of the survey asks users to evaluate the appropriateness of each multiple-choice option in the MESSYSURFACES benchmark and asks them to indicate the most socially appropriate way to tidy the object. These results are used to determine the correct answer for our benchmark questions, as described in Section 4 of the main paper. We designed our survey using Flask and Python and hosted it on an AWS server.
|
2306.08651#54
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 55 |
| Task | Method | Year | LLM | Technique |
|---|---|---|---|---|
| LLM-augmented KG embedding | Pretrain-KGE [94] | 2020 | E | LLMs as Text Encoders |
| LLM-augmented KG embedding | KEPLER [40] | 2020 | E | LLMs as Text Encoders |
| LLM-augmented KG embedding | Nayyeri et al. [132] | 2022 | E | LLMs as Text Encoders |
| LLM-augmented KG embedding | Huang et al. [133] | 2022 | E | LLMs as Text Encoders |
| LLM-augmented KG embedding | CoDEx [134] | 2022 | E | LLMs as Text Encoders |
| LLM-augmented KG embedding | LMKE [135] | 2022 | E | LLMs for Joint Text and KG Embedding |
| LLM-augmented KG embedding | kNN-KGE [136] | 2022 | E | LLMs for Joint Text and KG Embedding |
| LLM-augmented KG embedding | LambdaKG [137] | 2023 | E + D + ED | LLMs for Joint Text and KG Embedding |
| LLM-augmented KG completion | KG-BERT [26] | 2019 | E | Joint Encoding |
| LLM-augmented KG completion | MTL-KGC [138] | 2020 | E | Joint Encoding |
| LLM-augmented KG completion | PKGC [139] | 2022 | E | Joint Encoding |
| LLM-augmented KG completion | LASS [140] | 2022 | E | Joint Encoding |
| LLM-augmented KG completion | MEM-KGC [141] | 2021 | E | MLM Encoding |
| LLM-augmented KG completion | OpenWorld KGC [142] | 2023 | E | MLM Encoding |
| LLM-augmented KG completion | StAR [143] | 2021 | E | Separated Encoding |
| LLM-augmented KG completion | SimKGC [144] | 2022 | E | Separated Encoding |
| LLM-augmented KG completion | LP-BERT [145] | 2022 | E | Separated Encoding |
|
2306.08302#55
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 55 |
• Subtitle Ground: This model is similar to the narration grounding model, but it uses the video's subtitles as input instead of the narration. Thus, we also use GPT-4 to achieve it.
• Knowledge Reason: The purpose of this module is to enable the system to apply external knowledge to answer questions. We currently do not connect to the internet to retrieve knowledge, but use the knowledge that GPT-4 itself has learned. Specifically, this module lets GPT-4 use its own knowledge to infer the answer based on the question and the results of all previous reasoning steps.
• Narration Reason: The aim of this module is to infer information based on the visual content of the video. This module also uses GPT-4, taking the query and the input video's narration as prompts, to infer the answer (a minimal prompt sketch follows this list).
• Subtitle Reason: The aim of this module is to infer information based on the subtitles of the video. It is similar to Narration Reason, but takes the input video's subtitles and query as prompts to infer the answer.
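A minimal sketch of how such a GPT-4-backed reasoning module (e.g., Narration Reason) could be invoked; the prompt wording and client usage are assumptions, not the prompts used in AssistGPT.

```python
# Sketch of a GPT-4-backed reasoning module (e.g., Narration Reason): the
# prompt wording and client usage are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def narration_reason(query: str, narration: str) -> str:
    prompt = (
        "Video narration (with timestamps):\n"
        f"{narration}\n\n"
        f"Question: {query}\n"
        "Answer the question using only the narration above."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```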
|
2306.08640#55
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08651
| 55 |
# A.2 Prolific Details
We recruited crowdworkers from Prolific to complete our study. The study took an average of 10 minutes and each crowdworker was paid $2 ($12/hour). We required workers to be fluent English speakers, based in the U.S. or U.K., and have a minimum approval rating of 98%. Each worker was in charge of answering survey questions about all objects belonging to a desk. We have a total of 70 desks and ran our framework 5 times, resulting in the recruitment of 350 Prolific workers.
# B Framework Implementation Details
In this section, we describe the design choices and implementation details of our framework.
Generating the Initial Description. In the first step of our framework (Section 3 in the main paper), we generate an initial description of the scene and append it to our context C0. The initial description is a list of all the objects in the scene. To ensure that we list the objects accurately, we generate the initial description using ground truth names of objects (see Listing 1 for an example).
1 """ 2 These are the objects on the desk : 3 4 """ â scrunchie â , â lotion â , â vaseline â , â brush â.
Listing 1: Example Initial Description
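A minimal sketch of assembling this initial description from ground-truth object names; the helper name is hypothetical and not part of the released code.

```python
# Hypothetical helper that assembles a Listing 1 style initial description
# from ground-truth object names.
def initial_description(object_names: list) -> str:
    listed = ", ".join(f"'{name}'" for name in object_names)
    return f"These are the objects on the desk:\n{listed}."

print(initial_description(["scrunchie", "lotion", "vaseline", "brush"]))
```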
|
2306.08651#55
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 56 |
| Task | Method | Year | LLM | Technique |
|---|---|---|---|---|
| LLM-augmented KG completion | StAR [143] | 2021 | E | Separated Encoding |
| LLM-augmented KG completion | SimKGC [144] | 2022 | E | Separated Encoding |
| LLM-augmented KG completion | LP-BERT [145] | 2022 | E | Separated Encoding |
| LLM-augmented KG completion | GenKGC [96] | 2022 | D | LLM as decoders |
| LLM-augmented KG completion | KGT5 [146] | 2022 | ED | LLM as decoders |
| LLM-augmented KG completion | KG-S2S [147] | 2022 | ED | LLM as decoders |
| LLM-augmented KG completion | AutoKG [93] | 2023 | ED | LLM as decoders |
| LLM-augmented KG construction | ELMO [148] | 2018 | E | Named Entity Recognition |
| LLM-augmented KG construction | GenerativeNER [149] | 2021 | ED | Named Entity Recognition |
| LLM-augmented KG construction | LDET [150] | 2019 | E | Entity Typing |
| LLM-augmented KG construction | BOX4Types [151] | 2021 | E | Entity Typing |
| LLM-augmented KG construction | ELQ [152] | 2020 | E | Entity Linking |
| LLM-augmented KG construction | ReFinED [153] | 2022 | E | Entity Linking |
| LLM-augmented KG construction | BertCR [154] | 2019 | E | CR (Within-document) |
| LLM-augmented KG construction | Spanbert [155] | 2020 | E | CR (Within-document) |
| LLM-augmented KG construction | CDLM [156] | 2021 | E | CR (Cross-document) |
| LLM-augmented KG construction | CrossCR [157] | 2021 | E | CR (Cross-document) |
| LLM-augmented KG construction | CR-RL [158] | 2021 | E | CR (Cross-document) |
| LLM-augmented KG construction | SentRE [159] | 2019 | E | RE (Sentence-level) |
| LLM-augmented KG construction | Curriculum-RE [160] | 2021 | E | RE (Sentence-level) |
| LLM-augmented KG construction | DREEAM [161] | 2023 | E | RE |
|
2306.08302#56
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 56 |
• Temporal Reason: This model is able to find a video clip based on temporal relation words. The input to this module follows the format temporal_word: time stamps, e.g., after: 3 - 6. Temporal relation words include two types: the first is absolute temporal relation words, such as in the middle/beginning/end of the video; the second type is relative temporal relation words,
such as before and after. For the first type of words, we divide the video into 5 segments and then output the time stamps of the corresponding segment according to the temporal_word. For the second type, we divide the video into 8 segments, and then, according to the input time stamps, we output the time stamps of the segment before or after it (a minimal sketch of these rules follows this paragraph). The current hyperparameters for dividing video clips are still preliminary; it would be better to divide clips semantically with a model and then perform temporal reasoning in the future.
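A minimal sketch of these division rules, assuming absolute words map to one of 5 equal segments and relative words shift by one of 8 equal segments; the input parsing and segment indexing are illustrative assumptions.

```python
# Sketch of the Temporal Reason rules described above. Absolute temporal words
# select one of 5 equal segments; relative words shift by one of 8 equal
# segments around a reference interval. Details are illustrative assumptions.
def temporal_reason(temporal_word: str, duration: float, ref=None):
    if temporal_word in {"beginning", "middle", "end"}:
        seg = duration / 5
        index = {"beginning": 0, "middle": 2, "end": 4}[temporal_word]
        return index * seg, (index + 1) * seg
    if temporal_word in {"before", "after"} and ref is not None:
        seg = duration / 8
        start, end = ref                      # reference (start_sec, end_sec)
        if temporal_word == "before":
            return max(0.0, start - seg), start
        return end, min(duration, end + seg)
    raise ValueError(f"unsupported temporal word: {temporal_word}")
```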
# C Qualitative Results in A-OKVQA
In Figure 5, we showcase a successful instance along with several failure examples, illustrating the most frequent error patterns in A-OKVQA. As is evident, AssistGPT can produce highly interpretable answer processes. Moreover, even in cases where the questions are answered incorrectly, there are relatively reasonable explanations provided. In the following, we illustrate the common error patterns in detail:
|
2306.08640#56
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08651
| 56 |
Listing 1: Example Initial Description
Structuring Follow-Up Questions. In the second step of our framework, we prompt an LLM to generate follow-up questions about information that it is missing in its context. We structure our follow-up questions to be yes-or-no questions where the LLM also has an option to choose "Cannot answer from image". We choose a yes-or-no question format to make it easier to evaluate the VLM's answers to these questions. See §F.1 for the actual prompts used for the LLM.
Eliciting Informative Close-Up Angles from an LLM. In the third step of our framework, we prompt an LLM to generate informative close-up angles that guide a photo-taking robot. We restrict the close-up angles the LLM can choose to a set of 5 angles: <FRONT>, <BACK>, <LEFT>, <RIGHT>, <TOP>. When querying the LLM, we format the prompt as a multiple choice question where the options are the five specified angles. See §F.1 for further prompting details.
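A minimal sketch of posing the close-up angle choice as a multiple-choice prompt and parsing the reply; the wording is an assumption and differs from the actual prompts in §F.1.

```python
# Sketch of the multiple-choice close-up-angle query; the prompt wording is an
# illustrative assumption (the actual prompts are in Appendix F.1).
ANGLES = ["<FRONT>", "<BACK>", "<LEFT>", "<RIGHT>", "<TOP>"]

def closeup_prompt(obj: str, question: str) -> str:
    options = "\n".join(f"{chr(65 + i)}. {a}" for i, a in enumerate(ANGLES))
    return (
        f"To answer the follow-up question '{question}' about the {obj}, "
        f"which close-up angle would be most informative?\n{options}\n"
        "Reply with a single letter."
    )

def parse_choice(reply: str) -> str:
    letter = reply.strip()[0].upper()   # map the LLM's letter back to an angle token
    return ANGLES[ord(letter) - 65]
```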
|
2306.08651#56
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 57 |
| Task | Method | Year | LLM | Technique |
|---|---|---|---|---|
| LLM-augmented KG construction | SentRE [159] | 2019 | E | RE (Sentence-level) |
| LLM-augmented KG construction | Curriculum-RE [160] | 2021 | E | RE (Sentence-level) |
| LLM-augmented KG construction | DREEAM [161] | 2023 | E | RE (Document-level) |
| LLM-augmented KG construction | Kumar et al. [95] | 2020 | D + ED | End-to-End Construction |
| LLM-augmented KG construction | Guo et al. [162] | 2021 | E | End-to-End Construction |
| LLM-augmented KG construction | Grapher [41] | 2021 | E | End-to-End Construction |
| LLM-augmented KG construction | PiVE [163] | 2023 | ED | End-to-End Construction |
| LLM-augmented KG construction | COMET [164] | 2019 | D | Distilling KGs from LLMs |
| LLM-augmented KG construction | BertNet [165] | 2022 | E | Distilling KGs from LLMs |
| LLM-augmented KG construction | West et al. [166] | 2022 | D | Distilling KGs from LLMs |
| LLM-augmented KG-to-text Generation | Ribeiro et al. [167] | 2021 | D + ED | Leveraging Knowledge from LLMs |
| LLM-augmented KG-to-text Generation | JointGT [42] | 2021 | ED | Leveraging Knowledge from LLMs |
| LLM-augmented KG-to-text Generation | FSKG2Text [168] | 2021 | ED | Leveraging Knowledge from LLMs |
| LLM-augmented KG-to-text Generation | GAP [169] | 2022 | ED | Leveraging Knowledge from LLMs |
| LLM-augmented KG-to-text Generation | GenWiki [170] | 2020 | - | Constructing KG-text aligned Corpus |
| LLM-augmented KG-to-text Generation | KGPT [171] | 2020 | ED | Constructing KG-text aligned Corpus |
Lukovnikov et al.
|
2306.08302#57
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 57 |
• Undesired output format: For Direct Answer questions, like Q2, AssistGPT's results match the correct answers in meaning but differ in wording, which is counted as incorrect under the existing metrics.
• Fine-grained recognition: Existing visual models still struggle with fine-grained object categories, leading to incorrect final answers. For example, AssistGPT did not successfully recognize the cough drops in Q3.
• Pose-to-text: Very few models can currently map the fine-grained poses or actions of people or animals to natural language. For example, capturing the cat's upward jump in Q4 is a challenge. AssistGPT does not yet incorporate such a model; instead, it makes a prediction based on the objects surrounding the cat.
• Inconsistent reasoning: Despite having some self-error-correction mechanisms, AssistGPT occasionally exhibits inconsistencies in its reasoning process, which can lead to final inaccuracies. For instance, in Q5 the model initially identifies the orange vehicle as a truck, but in subsequent steps refers to it as a shuttle bus; AssistGPT fails to detect this inconsistency and does not make the necessary correction.
# D In-the-wild Prediction Examples
|
2306.08640#57
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08651
| 57 |
# C Real-World Robot Evaluation
When implementing our grounded social reasoning system on physical robot hardware, there are two operating modes, reflecting the active perception and skill execution components of our approach, respectively. As a preliminary, for the real-robot experiments, we assume that the object poses (in the coordinate frame of the robot's end-effector) are known a priori. While in this work we assume these poses are hand-specified by an expert, one could also use off-the-shelf perception systems that predict 6-DoF object poses or bounding boxes directly, as in prior work [36].
|
2306.08651#57
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 58 |
GenWiki [170], 2020, -, Constructing KG-text aligned Corpus
KGPT [171], 2020, ED, Constructing KG-text aligned Corpus
LLM-augmented KGQA:
Lukovnikov et al. [172], 2019, E, Entity/Relation Extractor
Luo et al. [173], 2020, E, Entity/Relation Extractor
QA-GNN [131], 2021, E, Entity/Relation Extractor
Nan et al. [174], 2023, E + D + ED, Entity/Relation Extractor
DEKCOR [175], 2021, E, Answer Reasoner
DRLK [176], 2022, E, Answer Reasoner
OreoLM [177], 2022, E, Answer Reasoner
GreaseLM [178], 2022, E, Answer Reasoner
ReLMKG [179], 2022, E, Answer Reasoner
UniKGQA [43], 2023, E, Answer Reasoner
|
2306.08302#58
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 58 |
# D In-the-wild Prediction Examples
We show some examples of AssistGPT handling in-the-wild scenarios in Figure 6 and Figure 7. From various in-the-wild examples, it's clear that AssistGPT can adeptly handle a range of video types, be it dense, subtitled instructional videos (Q2, Q3) or those featuring rich visual content with sporadic on-frame text (Q1, Q4, Q5). Impressively, when faced with high-level queries (Q2 and Q3), the model exhibits a capacity to strategically locate useful content, accurately identify the correct responses, and offer comprehensive, multimodal answers. A notable self-error-correction capability is also evident during its reasoning process, as demonstrated in Q2: the narration model was unable to generate meaningful narrations, so AssistGPT opted to use the subtitles to answer the question.
Moreover, in Q5, we highlight that our model can effectively process multiple video inputs serving different functions, including a user-view image and a couple of reference videos. It's important to note that our model can accommodate any number of inputs. Consequently, with the incorporation of a YouTube video search function, the model could autonomously seek out several reference videos and then cross-reference them to discern the user's intent.
|
2306.08640#58
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08651
| 58 |
Active Perception Primitives. The active perception component of our framework requires the robot to execute on two types of behaviors, which we codify as functional primitives move <direction>(<object>) and take photo(). While the latter behavior is well-defined (capture an image at the robot's current position), the directional movement primitives vary per-object. As each object in our experiments is of different scale and composition, we manually define a set of pose transformations p_dir ∈ SE(3) for each object and direction <FRONT>, <BACK>, <LEFT>, <RIGHT>, <TOP>. Given this dictionary of pose offsets, we implement move <direction> for a specified object and desired direction by planning and executing a min-jerk trajectory from the robot's current location to the resulting pose after applying p_dir to the known object's pose.
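For illustration, here is a minimal Python sketch of composing a per-object directional offset p_dir with a known object pose to obtain a viewpoint for take photo(). This is not the authors' code; the translation-only offsets, their values, and the helper names are assumptions, and the planner call is stubbed out.

import numpy as np

def offset(translation):
    # Build a 4x4 homogeneous transform with identity rotation (translation-only for brevity).
    T = np.eye(4)
    T[:3, 3] = translation
    return T

# Hypothetical per-object offsets p_dir, expressed relative to the object's pose (meters).
SODA_CAN_OFFSETS = {
    "FRONT": offset([0.15, 0.0, 0.05]),
    "TOP": offset([0.0, 0.0, 0.20]),
}

def move_direction(object_pose, direction):
    # Compose the known object pose with p_dir to get the target viewpoint pose.
    target_pose = object_pose @ SODA_CAN_OFFSETS[direction]
    # A real system would plan and execute a collision-free min-jerk trajectory to target_pose here.
    return target_pose

def take_photo():
    # Placeholder: capture an image at the robot's current position.
    pass

soda_can_pose = np.eye(4)  # assumed known a priori, in the robot's reference frame
move_direction(soda_can_pose, "TOP")
take_photo()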
|
2306.08651#58
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 59 |
E: Encoder-only LLMs, D: Decoder-only LLMs, ED: Encoder-decoder LLMs.
et al. [122] adopt the language model during pre-training to generate knowledge graphs. The knowledge acquired by LLMs during training can thus be unveiled explicitly by the facts in KGs. To explore how implicit knowledge is stored in the parameters of LLMs, Dai et al. [39] propose the concept of knowledge neurons. Specifically, the activation of the identified knowledge neurons is highly correlated with knowledge expression. Thus, they explore the knowledge and facts represented by each neuron by suppressing and amplifying knowledge neurons.
# 5 LLM-AUGMENTED KGS
Knowledge graphs are famous for representing knowledge in a structural manner. They have been applied in many downstream tasks such as question answering, recommendation, and web search. However, conventional KGs are often incomplete, and existing methods often fail to consider textual information. To address these issues, recent research has explored integrating LLMs to augment KGs, taking textual information into account and improving performance in downstream tasks. In this section, we will introduce the recent research on LLM-augmented KGs. We will introduce the methods that integrate LLMs for KG embedding, KG completion, KG construction, KG-to-text generation, and KG question answering, respectively. Representative works are summarized in Table 3.
|
2306.08302#59
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08651
| 59 |
Implementing Object-Centric Manipulation Skills. Similar to the perception primitives, we define each manipulation skill on a per-object basis as well; this is due both to the variety in object scale and properties and to the variance in grasp locations for different desired behaviors. For example, the location where the robot should grasp an object such as a soda can may differ greatly depending on whether we want to throw the soda can away into a recycling bin (in which case the robot should grasp the soda can across the top) or relocate the can to a shelf (in which case the robot should grasp the soda can along the side, to aid in insertion). To formalize this, we define a fixed interface depicted in Fig. 9. The provided API defines functions for each skill (for example, relocate() and cleanup()) at the object level, with a stateful function set_designated() that provides a compositional way to set target locations (i.e., "receptacles"). Fig. 9 (Right) shows the actual invoked API calls for the Kitchen Cleanup Desk depicted in Fig. 15.
|
2306.08651#59
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 60 |
[Figure 14 contents: textual descriptions of a triple's entities and relation (e.g., Neil Armstrong: "An American astronaut and aeronautical engineer."; Wapakoneta: "A small city in Ohio, USA.") are encoded as Text_h, Text_r, Text_t and fed into KGE models to produce embeddings for h, r, t.]
Fig. 14. LLMs as text encoder for knowledge graph embedding (KGE).
# 5.1 LLM-augmented KG Embedding
Knowledge graph embedding (KGE) aims to map each entity and relation into a low-dimensional vector (embedding) space. These embeddings contain both semantic and structural information of KGs, which can be utilized for various tasks such as question answering [180], reasoning [38], and recommendation [181]. Conventional knowledge graph embedding methods mainly rely on the structural information of KGs to optimize a scoring function defined on embeddings (e.g., TransE [33] and DistMult [182]). However, these approaches often fall short in representing unseen entities and long-tailed relations due to their limited structural connectivity [183], [184]. To address this issue, as shown in Fig. 14, recent research adopts LLMs to enrich representations of KGs by encoding the textual descriptions of entities and relations [40], [94].
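For concreteness, the two structural scoring functions cited above can be written in a few lines. This is only a sketch of what a purely structure-based KGE model optimizes, using their standard definitions; the toy random embeddings stand in for learned vectors.

import numpy as np

def transe_score(h, r, t):
    # TransE: a relation acts as a translation, so plausible triples satisfy h + r ≈ t.
    return -float(np.linalg.norm(h + r - t))

def distmult_score(h, r, t):
    # DistMult: a diagonal bilinear product of head, relation, and tail embeddings.
    return float(np.sum(h * r * t))

# Toy usage with random embeddings; real models learn these vectors from KG triples.
rng = np.random.default_rng(0)
h, r, t = (rng.normal(size=64) for _ in range(3))
print(transe_score(h, r, t), distmult_score(h, r, t))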
# 5.1.1 LLMs as Text Encoders
|
2306.08302#60
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 60 |
[Figure text (qualitative prediction examples, partially recoverable): Q1: "What is the man in the gray suit on the left looking down to check?" (phone / tablet / notebook / pager); AssistGPT calls region_ground to crop the man on the left and then captions the region. Q2: "Which number birthday is probably being celebrated?"; text_detect on an image of a teddy-bear cake returns OCR text including "30". Q3: "What item on the desk could help with a cold?" (cough drops / syringe / pills / herbal tea); the desk image is captioned. Q4: "What activity does the cat appear most likely to do?" (drink / jump / eat / sleep).]
|
2306.08640#60
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08651
| 60 |
[Figure 9 contents (partially recoverable): task "Clean up the kitchen counter"; objects ["cardboard", "banana", "tissue", "bottle", "apple", "can"]; per-object skill assignments such as {"relocate": "countertop", "cleanup": "recycling bin"} and {"relocate": "countertop", "cleanup": "trash can"}; interface RobotManipulationInterface with func leave_alone(object: str) -> None, func set_designated(receptacle: str) -> None, func relocate(object: str) -> None, func cleanup(object: str) -> None, func done(); execution trace including robot = RobotManipulationInterface(), robot.set_designated("recycling bin"), robot.set_designated("trash can"), and robot.leave_alone("can").]
Figure 9: Code as Policies Interface for Real-Robot Execution. We define a simple programmatic interface for specifying robot skill primitives in an object-oriented fashion. The interface is stateful; for robot primitives such as cleanup() and relocate(), the robot sets a designated receptacle via the special function set_designated(). On the right, we provide the actual execution trace produced by the LLM for the Kitchen Cleanup Desk (see Fig. 15).
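A minimal, runnable Python rendition of this stateful interface follows. The method names come from the figure, while the bodies and the example trace are illustrative placeholders rather than the authors' implementation.

class RobotManipulationInterface:
    """Object-level skill interface; receptacles are set via set_designated() before use."""

    def __init__(self):
        self._designated = None  # currently designated receptacle

    def set_designated(self, receptacle: str) -> None:
        self._designated = receptacle

    def relocate(self, obj: str) -> None:
        print(f"relocate {obj} -> {self._designated}")

    def cleanup(self, obj: str) -> None:
        print(f"cleanup {obj} -> {self._designated}")

    def leave_alone(self, obj: str) -> None:
        print(f"leave {obj} alone")

    def done(self) -> None:
        print("done")

# Illustrative trace in the spirit of the Kitchen Cleanup Desk example (not the exact figure trace).
robot = RobotManipulationInterface()
robot.set_designated("recycling bin")
robot.cleanup("bottle")
robot.leave_alone("banana")
robot.done()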
|
2306.08651#60
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 61 |
# 5.1.1 LLMs as Text Encoders
Pretrain-KGE [94] is a representative method that follows the framework shown in Fig. 14. Given a triple (h, r, t) from KGs, it first uses an LLM encoder to encode the textual descriptions of entities h, t, and relation r into representations as
e_h = LLM(Text_h), e_t = LLM(Text_t), e_r = LLM(Text_r),   (1)
where e_h, e_r, and e_t denote the initial embeddings of entities h, t, and relation r, respectively. Pretrain-KGE uses BERT as the LLM encoder in its experiments. Then, the initial embeddings are fed into a KGE model to generate the final embeddings v_h, v_r, and v_t. During the KGE training phase, they optimize the KGE model with the standard KGE loss function
L = [γ + f(v_h, v_r, v_t) − f(v'_h, v'_r, v'_t)],   (2)
|
2306.08302#61
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 61 |
[Figure text, continued (partially recoverable): Q3 caption observation: on the desk there is a laptop computer, a notebook, and a pen. Q4 caption observation: the cat is sitting on the floor in front of an open oven, staring at its reflection in the oven's glass door, with a bowl and a water bottle nearby. Q1: the man is looking down to check his cell phone; Final Answer: phone. Q2: the detected number 30 is taken as the birthday number; Final Answer: 30; ground truth includes "thirty" and "30th".]
|
2306.08640#61
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08651
| 61 |
[Figure 10 contents: bar charts of accuracy under an Oracle VLM and a Zero-Shot VLM for the conditions No Questions, No Active Perception, Baseline Questions, Ours-Front, Ours-LLM, and Oracle.]
Figure 10: Real Robot Benchmark Accuracy. We construct benchmark questions for objects used with the real robot in a similar manner to Section 4 in the main paper. Across both types of VLMs, Ours-LLM beats Baseline Questions by an average of 13.5%, beats No Active Perception by an average of 18%, and beats No Questions by an average of 13.5%.
|
2306.08651#61
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 62 |
L = [γ + f(v_h, v_r, v_t) − f(v'_h, v'_r, v'_t)],   (2)
where f is the KGE scoring function, γ is a margin hyperparameter, and v'_h, v'_r, and v'_t are the negative samples. In this way, the KGE model can learn adequate structural information while preserving partial knowledge from the LLM, enabling better knowledge graph embedding. KEPLER [40] offers a unified model for knowledge embedding and pre-trained language representation. This model not only generates effective text-enhanced knowledge embedding
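A schematic Python sketch of Eqs. (1) and (2) follows. The encoder is a deterministic stand-in for an LLM such as BERT, the scoring function is TransE-style, and clamping the margin loss at zero is an assumption of common practice rather than something stated above; this is not the authors' code.

import hashlib
import numpy as np

def llm_encode(text: str, dim: int = 64) -> np.ndarray:
    # Stand-in for an LLM text encoder: a deterministic pseudo-random vector per description.
    seed = int.from_bytes(hashlib.md5(text.encode()).digest()[:4], "little")
    return np.random.default_rng(seed).normal(size=dim)

def f(v_h, v_r, v_t) -> float:
    # TransE-style scoring function: a smaller distance means a more plausible triple.
    return float(np.linalg.norm(v_h + v_r - v_t))

def kge_loss(pos, neg, gamma: float = 1.0) -> float:
    # Eq. (2): gamma + f(positive triple) - f(corrupted triple), clamped at zero.
    return max(0.0, gamma + f(*pos) - f(*neg))

# Eq. (1): initial embeddings from textual descriptions; a KGE model would refine these into v_h, v_r, v_t.
e_h = llm_encode("Neil Armstrong: an American astronaut and aeronautical engineer.")
e_r = llm_encode("born in")
e_t = llm_encode("Wapakoneta: a small city in Ohio, USA.")
e_t_neg = llm_encode("Paris: the capital of France.")  # corrupted tail for a negative sample
print(kge_loss((e_h, e_r, e_t), (e_h, e_r, e_t_neg)))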
[Figure 15 contents: masked entity prediction over the linearized triple (Neil Armstrong, BornIn, Wapakoneta), where one entity is replaced with [MASK] and recovered by the LLM.]
Fig. 15. LLMs for joint text and knowledge graph embedding.
|
2306.08302#62
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 62 |
[Figure text, continued (partially recoverable): Q3: none of the given options appear in the caption, so knowledge_reason selects the best option by common sense; Final Answer: herbal tea. Q4: knowledge_reason over the objects around the cat; Final Answer: drink. Q5: "What are the orange vehicles for?" (police / shuttle / passengers / air), asked about an image of two airplanes on a runway; a caption identifies the orange vehicles as tow trucks, and the model checks the answer choices one by one.]
|
2306.08640#62
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08651
| 62 |
We implement each object-oriented skill, relocate() and cleanup(), for a given object o_i and receptacle r_j as a tuple of pick-and-place waypoints defined as (pick_{o_i}, place_{r_j}) ∈ SE(3) × SE(3); each pick/place point is defined as a transformation relative to the origin of the robot's reference frame. To execute on a "pick" waypoint, we plan a collision-free min-jerk trajectory to the given pose, and execute a blocking grasp action; similarly, to execute on a "place" waypoint, we plan a similar trajectory to the given receptacle pose, and execute a blocking open-gripper action. We run all experiments with a 7-DoF Franka Emika Panda robot manipulator equipped with a Robotiq 2F-85 gripper, using Polymetis [55] to facilitate real-time control and Pinocchio [56, 57] for trajectory planning.
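A minimal sketch of this skill representation and execution loop in Python; the robot-interface methods used here (follow_min_jerk_trajectory, close_gripper, open_gripper) are hypothetical stand-ins for the Polymetis/Pinocchio stack described above, not the authors' code.

from dataclasses import dataclass

@dataclass
class Pose:
    # Stand-in for an SE(3) transform relative to the robot's base frame.
    position: tuple       # (x, y, z) in meters
    orientation: tuple    # quaternion (qx, qy, qz, qw)

@dataclass
class SkillWaypoints:
    # A skill for object o_i and receptacle r_j is a (pick, place) pose pair.
    pick: Pose
    place: Pose

def execute_skill(robot, skill: SkillWaypoints) -> None:
    # "Pick": plan a collision-free min-jerk trajectory to the pick pose, then grasp.
    robot.follow_min_jerk_trajectory(skill.pick)
    robot.close_gripper(blocking=True)
    # "Place": plan a similar trajectory to the receptacle pose, then release.
    robot.follow_min_jerk_trajectory(skill.place)
    robot.open_gripper(blocking=True)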
|
2306.08651#62
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 63 |
Fig. 15. LLMs for joint text and knowledge graph embedding.
using powerful LLMs but also seamlessly integrates factual knowledge into LLMs. Nayyeri et al. [132] use LLMs to generate the world-level, sentence-level, and document-level representations. They are integrated with graph structure embeddings into a unified vector by Dihedron and Quaternion representations of 4D hypercomplex numbers. Huang et al. [133] combine LLMs with other vision and graph encoders to learn multi-modal knowledge graph embedding that enhances the performance of downstream tasks. CoDEx [134] presents a novel loss function empowered by LLMs that guides the KGE models in measuring the likelihood of triples by considering the textual information. The proposed loss function is agnostic to model structure and can be incorporated with any KGE model.
# 5.1.2 LLMs for Joint Text and KG Embedding
Instead of using a KGE model to consider graph structure, another line of methods directly employs LLMs to incorporate both the graph structure and textual information into the embedding space simultaneously. As shown in Fig. 15, kNN-KGE [136] treats the entities and relations as special tokens in the LLM. During training, it transfers each triple (h, r, t) and corresponding text descriptions into a sentence x as
|
2306.08302#63
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08651
| 63 |
Grounding Language to Skills. While the API definition deterministically defines robot behavior and skill execution in a given environment, we need a way of mapping natural language action plans generated by the LLM to sequences of API calls; for example, mapping the language action "dispose of the coffee cup" to the corresponding API calls robot.set_designated("recycling bin"); robot.cleanup("coffee cup"); robot.done(). To do this, we follow a similar procedure as in prior work using LLMs for code generation, prompting an LLM with the API definition, a series of in-context examples, and a continuation prompt for generating a valid sequence of API calls. The continuation prompt contains the set of known objects in the environment and valid receptacles defined for each skill, following prior work [42, 45]. The full prompt is in §F.5.
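As a rough illustration of this grounding step (not the exact prompt or parser from the paper), one can concatenate the API definition, an in-context example, and the scene context, then split the LLM completion into individual calls; llm_complete below is a hypothetical hook to any text-completion API.

API_DEFINITION = """\
robot.set_designated(receptacle: str)   # select the target receptacle
robot.cleanup(object: str)              # move the object into the designated receptacle
robot.relocate(object: str, receptacle: str)
robot.done()
"""

IN_CONTEXT_EXAMPLE = (
    "Action: dispose of the coffee cup\n"
    'Calls: robot.set_designated("recycling bin"); '
    'robot.cleanup("coffee cup"); robot.done()\n'
)

def ground_action(llm_complete, action, known_objects, receptacles):
    # Build the continuation prompt: API definition, example, scene context, new action.
    prompt = (
        API_DEFINITION
        + IN_CONTEXT_EXAMPLE
        + "Known objects: " + ", ".join(known_objects) + "\n"
        + "Valid receptacles: " + ", ".join(receptacles) + "\n"
        + "Action: " + action + "\nCalls:"
    )
    completion = llm_complete(prompt)
    # Return the generated sequence of API calls as separate strings.
    return [call.strip() for call in completion.split(";") if call.strip()]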
Evaluation. We add Fig. 10 to supplement our results described in Section 5 of the main paper.
# D VLM Details
We use pretrained vision-and-language models (VLMs) trained on massive internet-scale image and text data to answer the questions generated by the LLM. Following Appendix B, we prompt the LLM so that it generates queries that can be easily answered by yes, no, or unknown; these queries (and the respective images) are the inputs to the VLM.
|
2306.08651#63
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 64 |
x = [CLS] h Text_h [SEP] r [SEP] [MASK] Text_t [SEP],  (3)

where the tail entity is replaced by [MASK]. The sentence is fed into an LLM, which is then fine-tuned to predict the masked entity, formulated as

P_LLM(t|h, r) = P([MASK] = t | x, Θ),  (4)

where Θ denotes the parameters of the LLM. The LLM is optimized to maximize the probability of the correct entity t. After training, the corresponding token representations in LLMs are used as embeddings for entities and relations. Similarly, LMKE [135] proposes a contrastive learning method to improve the learning of embeddings generated by LLMs for KGE. Meanwhile, to better capture graph structure, LambdaKG [137] samples 1-hop neighbor entities and concatenates their tokens with the triple as a sentence that is fed into LLMs.
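A sketch of the masked-entity scoring in Eqs. (3)-(4) using an off-the-shelf masked language model via Hugging Face transformers; note that kNN-KGE additionally registers entities as special tokens and fine-tunes the model, which is omitted here, and multi-token entity names are approximated by their first sub-token.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def rank_tail_candidates(head_text, relation_text, candidates):
    # Build x = "<head> <relation> [MASK]." and read off P([MASK] = t | x), cf. Eq. (4).
    x = f"{head_text} {relation_text} {tokenizer.mask_token}."
    inputs = tokenizer(x, return_tensors="pt")
    mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero().item()
    with torch.no_grad():
        probs = model(**inputs).logits[0, mask_pos].softmax(dim=-1)
    scores = {}
    for cand in candidates:
        # Approximate a multi-token candidate by its first sub-token id.
        first_id = tokenizer(cand, add_special_tokens=False).input_ids[0]
        scores[cand] = probs[first_id].item()
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_tail_candidates("Lebron James", "plays for the team", ["lakers", "warriors"]))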
# 5.2 LLM-augmented KG Completion
Knowledge Graph Completion (KGC) refers to the task of inferring missing facts in a given knowledge graph. Similar to KGE, conventional KGC methods mainly focused on
|
2306.08302#64
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08302
| 65 |
Knowledge Graph Completion (KGC) refers to the task of inferring missing facts in a given knowledge graph. Similar to KGE, conventional KGC methods mainly focused on
the structure of the KG, without considering the extensive textual information. However, the recent integration of LLMs enables KGC methods to encode text or generate facts for better KGC performance. These methods fall into two distinct categories based on their utilization styles: 1) LLM as Encoders (PaE), and 2) LLM as Generators (PaG).
# 5.2.1 LLM as Encoders (PaE).
As shown in Fig. 16 (a), (b), and (c), this line of work first uses encoder-only LLMs to encode textual information as well as KG facts. Then, they predict the plausibility of the triples or masked entities by feeding the encoded representation into a prediction head, which could be a simple MLP or conventional KG score function (e.g., TransE [33] and TransR [185]).
Joint Encoding. Since encoder-only LLMs (e.g., BERT [1]) are effective at encoding text sequences, KG-BERT [26] represents a triple (h, r, t) as a text sequence and encodes it with the LLM (Fig. 16(a)).
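A minimal PyTorch sketch of this joint-encoding scorer: the triple text is serialized as [CLS] Text_h [SEP] Text_r [SEP] Text_t [SEP], and the [CLS] representation is passed through an MLP with a sigmoid to estimate plausibility. The hidden size and the absence of a training loop are illustrative choices, not taken from KG-BERT.

import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class TripleScorer(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased", hidden=256):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(encoder_name)
        self.encoder = AutoModel.from_pretrained(encoder_name)
        dim = self.encoder.config.hidden_size
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, head_text, rel_text, tail_text):
        sep = self.tokenizer.sep_token
        # Serialize the triple as one sequence: [CLS] Text_h [SEP] Text_r [SEP] Text_t [SEP]
        text = f"{head_text} {sep} {rel_text} {sep} {tail_text}"
        enc = self.tokenizer(text, return_tensors="pt", truncation=True)
        e_cls = self.encoder(**enc).last_hidden_state[:, 0]   # [CLS] representation
        return torch.sigmoid(self.mlp(e_cls))                 # plausibility of (h, r, t)

scorer = TripleScorer()
print(scorer("Lebron James", "member of sports team", "Lakers"))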
|
2306.08302#65
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 65 |
Thought: I need to find the segment of Messi's second goal Action: video_narration( visual[>]) Observation: 9-12: he is celebrating with his team mates 12-15: he is kicking the ball into the goal 15-18: he is dribbling the ball Thought: Find the video segment that shows how to grip the racket when striking. Action: subtitle_ground( visual[°]) Observation: The target clip is from 32.16 to 42.8 in subtitle. visual-1: a 10.64 seconds video, dense subtitle, segmented video from visual-0 Talking about "How to grip the racket when striking" Thought: I know the final answer. of Messi's second goal. Action: video_ground( visual[>]) Observation: the target clip is from 12-27 visual-1: a 15 seconds video, no meaningful subtitle, segmented from visual-0, ... Thought: Describe the video clip to provide the answer. Action: video_narration( Visual[ ]) Observation: 0-3: badminton court - how to play badminton 3-6: how to hit a badminton ball 6-9: how to play badminton visual-0: a 146.59 seconds video, no meaningful
|
2306.08640#65
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08651
| 65 |
Figure 11: Example of VLM Text Prompt and Image Input.
Specifically, we set the text prompt following Fig. 11. We use InstructBLIP [52] as our VLM and select the output with the highest predicted probability P(answer | prompt, image) for answer ∈ {yes, no, unknown} as the final answer. As InstructBLIP can use multiple LLM backbones, we evaluate both the Vicuna-13B and Flan-T5-XXL (11B) variants, finding Flan-T5-XXL to work better for our tasks. We have also experimented with further finetuning InstructBLIP on the in-domain data from the MESSYSURFACES dataset, but have not seen any noticeable performance gains; as a result, we use the off-the-shelf pretrained models in this work.
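A sketch of this answer selection using the Hugging Face InstructBLIP interface; here each candidate answer is scored by the negative loss of forcing the model to generate it, which is one way to realize the argmax over P(answer | prompt, image) but may differ from the authors' exact implementation, and the public checkpoint name is assumed.

import torch
from PIL import Image
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration

processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-flan-t5-xxl")
model = InstructBlipForConditionalGeneration.from_pretrained("Salesforce/instructblip-flan-t5-xxl")

def answer_yes_no_unknown(image: Image.Image, question: str) -> str:
    scores = {}
    for candidate in ("yes", "no", "unknown"):
        inputs = processor(images=image, text=question, return_tensors="pt")
        labels = processor.tokenizer(candidate, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model(**inputs, labels=labels)
        # out.loss is the mean negative log-likelihood of the candidate tokens.
        scores[candidate] = -out.loss.item()
    return max(scores, key=scores.get)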
# E Personalization Analysis
|
2306.08651#65
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08640
| 66 |
play badminton 3-6: how to hit a badminton ball 6-9: how to play badminton visual-0: a 146.59 seconds video, no meaningful subtitle, user provided video, Every Lionel Messi Goal from the 2022 FIFA World Cup Thought: Find the segment Thought: Identify the country Messi scored his second goal against. Action: text_recognize( visual[>]) Observation: 63:27 ARG 0 0 MEX WORLD'S BEST AIRLINE QATAR AIRWAYS QATAR AIRWAYS 16 Vorldcr FIFAR) WORLD'S BEST AIRLINI visual-0: a 279.08 seconds video, Dense Subtitle, user provided video, Common Beginner Badminton Mistakes - Do And Don'ts 2 Thought: The narration doesn't provide enough information about gripping the racket. I need to watch the video more closely. Action: video_narration( visual[ ]) Observation: 0-3: Describe how to grip the racket when striking 3-6: how to grip the racket when hitting the ball 6-9: Describe how to grip the racket when striking Question1: Which country was Messi's second goal scored against? Thought: Infer which country was
|
2306.08640#66
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08651
| 66 |
# E Personalization Analysis
We explore the hypothesis that incorporating personal preferences on how to clean objects can lead to a higher accuracy on our benchmark, as discussed in Sections 5 and 6 of the main paper. We studied questions that the human Oracle got incorrect in Section 5 of the main paper. Qualitatively, we found that some attributes of an object such as its "dirtiness" can be subjective, lending support to our hypothesis. This may have caused the Oracle to incorrectly answer some questions. For instance, in Question 6 of Fig. 12, the Oracle did not consider a keyboard that had a small amount of dust on it to be "dirty" enough and chose to "leave it as is". However, the majority of annotators preferred that the keyboard "should be cleaned".
|
2306.08651#66
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 67 |
where σ(·) denotes the sigmoid function and e_[CLS] denotes the representation encoded by LLMs. To improve the efficacy of KG-BERT, MTL-KGC [138] proposed a Multi-Task Learning for the KGC framework which incorporates additional auxiliary tasks into the model's training, i.e., relation prediction (RP) and relevance ranking (RR). PKGC [139] assesses the validity of a triplet (h, r, t) by transforming the triple and its supporting information into natural language sentences with pre-defined templates. These sentences are then processed by LLMs for binary classification. The supporting information of the triplet is derived from the attributes of h and t with a verbalizing function. For instance, if the triple is (Lebron James, member of sports team, Lakers), the information regarding Lebron James is verbalized as "Lebron James: American basketball player". LASS [140] observes that language semantics and graph structures are equally vital to KGC. As a result, LASS is proposed to jointly learn two types of embeddings: semantic embedding and structure embedding. In this method, the full text of a triple is
|
2306.08302#67
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 67 |
ball 6-9: Describe how to grip the racket when striking Question1: Which country was Messi's second goal scored against? Thought: Infer which country was Messi's second goal scored against. Action: knowledge_reason( {)) Observation: Mexico Thought: I know the final answer. Final Answer: Mexico Question2: How to grip the racket when striking? And show me a video clip to illustrate it. Thought: The video narration still doesn't provide enough information about gripping the racket. I will try to find more information from the subtitles. Action: subtitle_reason( visual[|]) Observation: When striking, grip the racket with a forehand grip, which allows you to rotate the forearm and generate more power in your shots. This grip also enables you to get into a side-on position and bring your elbow back, giving you more control and deception with your overhead shots. | Final Answer: When striking, grip the racket with a forehand grip, which allows you to rotate the forearm and generate more power in your shots. This grip also enables you to get into a side-on position and bring your elbow back, giving you more control and deception with your overhead shots. To see a video clip
|
2306.08640#67
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08651
| 67 |
We explored whether adding preferences would improve our framework's accuracy. We selected 9 questions where both the Oracle and our framework, LLM-Ours, chose the same incorrect answer. The full list of questions is shown in Fig. 12 and Fig. 13. We recruited a participant and, for each question, asked them whether the Oracle could have chosen the incorrect answer because of a lack of preference information. If the participant agreed that there was a lack of preference information, we asked them what the preference would be. For instance, in Question 6 of Fig. 12, the user noted that the disagreement between the human Oracle and human annotators could have been due to a lack of preference information, such as "It's not acceptable for objects to have any signs of dirtiness". The participant indicated that the Oracle could have incorrectly answered 8 out of the 9 questions due to a lack of preference information. Question 9 in Fig. 13 is an example of a question where the user thought the Oracle was incorrect due to noise.
|
2306.08651#67
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 68 |
LASS is proposed to jointly learn two types of embeddings: semantic embedding and structure embedding. In this method, the full text of a triple is forwarded to the LLM, and the mean pooling of the corresponding LLM outputs for h, r, and t are separately calculated. These embeddings are then passed to a graph-based method, i.e., TransE, to reconstruct the KG structures. MLM Encoding. Instead of encoding the full text of a triple, many works introduce the concept of Masked Language Model (MLM) to encode KG text (Fig. 16(b)). MEM-KGC [141] uses a Masked Entity Model (MEM) classification mechanism to predict the masked entities of the triple. The input text is in the form of
|
2306.08302#68
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08651
| 68 |
For the remaining 8 questions, our goal was to see if adding preferences to the LLM's prompt would help the LLM choose the "correct" action plan as indicated by the annotators' majority label. We used 1 question to tune the prompt and evaluated the LLM on the remaining 7 questions (Questions 2-8 in Fig. 12 and Fig. 13). We prompted the LLM by appending preference information to the prompt for choosing an action plan (described in §F.3). An example prompt is shown in Listing 2:
1 """ 2 The owner of the object has a preference on how you should tidy the â candle â: Don ât trim the wick . It doesn ât matter whether the burnt part of the candle wick is excessively long because I can still light it . 3 4 The best option is : 5 """
# Listing 2: Example Prompt for Generating an Action Plan with Preference Information
We found an average 86% improvement in accuracy, lending support to the hypothesis that preference information helps further enable grounded social reasoning.
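For concreteness, a tiny sketch of how such preference information can be appended to the action-plan prompt; the base prompt text here is a stand-in, not the exact wording from §F.3.

def build_prompt_with_preference(base_action_plan_prompt: str, obj: str, preference: str) -> str:
    # Append the owner's stated preference before asking the LLM to pick an option.
    preference_block = (
        f"The owner of the object has a preference on how you should tidy the '{obj}': "
        f"{preference}\n\nThe best option is:"
    )
    return base_action_plan_prompt.rstrip() + "\n\n" + preference_block

print(build_prompt_with_preference(
    "Options for tidying the candle: (a) trim the wick, (b) leave it as is.",
    "candle",
    "Don't trim the wick. It doesn't matter whether the burnt part of the candle wick "
    "is excessively long because I can still light it.",
))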
|
2306.08651#68
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 69 |
x = [CLS] Text_h [SEP] Text_r [SEP] [MASK] [SEP],  (7)
Similar to Eq. 4, it tries to maximize the probability that the masked entity is the correct entity t. Additionally, to enable the model to learn unseen entities, MEM-KGC integrates
(Figure 16 panels: (a) Joint Encoding, (b) MLM Encoding, (c) Separated Encoding; each panel shows the triple (h, r, t) serialized as a [CLS]/[SEP]-delimited text sequence fed to an LLM with an MLP or score-function head.)
Fig. 16. The general framework of adopting LLMs as encoders (PaE) for KG Completion.
multitask learning for entities and super-class prediction based on the text description of entities:
x = [CLS] [MASK] [SEP] Text_h [SEP].  (8)
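A small sketch of the two MLM input formats above; the special tokens are written literally here for clarity, whereas in practice a tokenizer inserts [CLS]/[SEP] and the model predicts the token at the [MASK] position.

def masked_entity_input(head_text: str, rel_text: str) -> str:
    # Eq. (7): predict the masked tail entity from the head and relation text.
    return f"[CLS] {head_text} [SEP] {rel_text} [SEP] [MASK] [SEP]"

def super_class_input(head_text: str) -> str:
    # Eq. (8): predict the masked super-class of an entity from its description alone.
    return f"[CLS] [MASK] [SEP] {head_text} [SEP]"

print(masked_entity_input("Lebron James: American basketball player", "member of sports team"))
print(super_class_input("Lebron James: American basketball player"))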
|
2306.08302#69
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08651
| 69 |
Question 1: "tablet" (used as "training example" to tune prompt). Oracle/LLM Answer: Detach accessories, put in sleep mode, and place tablet and accessories in a designated area. Annotator Majority Answer (Correct Label): Ensure the tablet in sleep mode is in a designated area. Disagreement Due to Lack of Preferences? Yes. Missing Preference: I prefer to keep my tablet accessories attached so I can continue charging them.
Question 2: "cup". Oracle/LLM Answer: Wash and dry the cup with residue or stains. Annotator Majority Answer (Correct Label): Leave the empty, clean cup as is. Disagreement Due to Lack of Preferences? Yes. Missing Preference: Leave cups that don't appear visibly dirty.
Question 3: "controller". Oracle/LLM Answer: Clean the controller with a soft cloth or cleaning solution, then place it in a designated area. Annotator Majority Answer (Correct Label): Leave the controller as is on the stand or designated area. Disagreement Due to Lack of Preferences? Yes. Missing Preference: It's acceptable to have some dust on objects.
Question 4: "keyboard". Oracle/LLM Answer: Leave the
|
2306.08651#69
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 70 |
multitask learning for entities and super-class prediction based on the text description of entities:
x = [CLS] [MASK] [SEP] Text_h [SEP].
OpenWorld KGC [142] expands the MEM-KGC model to address the challenges of open-world KGC with a pipeline framework, where two sequential MLM-based modules are defined: Entity Description Prediction (EDP), an auxiliary module that predicts a corresponding entity with a given textual description; Incomplete Triple Prediction (ITP), the target module that predicts a plausible entity for a given incomplete triple (h, r, ?). EDP first encodes the triple with Eq. 8 and generates the final hidden state, which is then forwarded into ITP as an embedding of the head entity in Eq. 7 to predict target entities.
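To make the MLM-encoding idea concrete, the sketch below sends an (h, r, ?) query through a generic fill-mask pipeline. It is only an illustration under the assumption that word-piece tokens stand in for entities; MEM-KGC and OpenWorld KGC replace the masked-token softmax with an entity-level prediction head. The head/relation strings reuse the Charlie's Angels example that appears later in this section.

# Minimal sketch (not the exact MEM-KGC / OpenWorld KGC heads): masked-entity-style
# prediction with a generic BERT fill-mask pipeline.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

head_text = "Charlie's Angels: Full Throttle"   # textual description of h
relation_text = "genre of"                      # textual description of r
query = f"{head_text} {relation_text} [MASK]."  # MLM-style query with a masked tail

# Print the top word-piece candidates for the masked position.
for candidate in fill_mask(query, top_k=5):
    print(candidate["token_str"], round(candidate["score"], 4))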
Separated Encoding. As shown in Fig. 16(c), these methods involve partitioning a triple (h, r, t) into two distinct parts, i.e. (h, r) and t, which can be expressed as
x_(h,r) = [CLS] Text_h [SEP] Text_r [SEP],
x_t = [CLS] Text_t [SEP]. (10)
|
2306.08302#70
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 70 |
visual-0: a 917.51 seconds video, dense subtitle, user provided video, GoPro Hero 8 Black Best Tips and Tricks!; visual-1: a 1783.08 seconds video, dense subtitle, user provided video, GoPro Hero 10 Beginners Guide & Tutorial - Getting Started.
Thought: First, I need to understand what the setting in the image is. I will use the text_detect tool to read any visible settings or text in the image. Action: text_detect(visual[2]) Observation: Linear + Horizon Lock, Full 360 rotation.
Thought: The text recognition tool provided the setting information, which is "Linear + Horizon Lock, Full 360 rotation." It appears to be a setting or a feature of a device, possibly the smartwatch in the image. ... I'll use the subtitle_ground tool on both videos to find a segment that mentions this setting. Action: subtitle_ground(..., visual[0]) Observation: Warning: Relevant clip didn't found.
Thought: I need to watch the video and narrate the content.
Thought: The first video doesn't contain the required information. I'll now try the same action with the second video. Action: subtitle_ground(..., visual[1]) Observation: the target clip is from
|
2306.08640#70
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08651
| 70 |
Yes. Missing Preference: It's acceptable to have some dust on objects.
Question 4: "keyboard". Oracle/LLM Answer: Leave the properly placed and connected keyboard as is. Annotator Majority Answer (Correct Label): Clean the dirty or dusty keyboard and place it in a convenient location. Disagreement Due to Lack of Preferences? Yes. Missing Preference: It's not acceptable for objects to have any signs of dirtiness.
Question 5: "mouse". Oracle/LLM Answer: Leave the properly placed and connected mouse as is. Annotator Majority Answer (Correct Label): Clean the dirty or dusty mouse and place it in a convenient location. Disagreement Due to Lack of Preferences? Yes. Missing Preference: It's not acceptable for objects to have any signs of dirtiness.
Question 6: "keyboard". Oracle/LLM Answer: Leave keyboard as is, ensuring it is placed neatly. Annotator Majority Answer (Correct Label): Clean the dirty keyboard and place it neatly. Disagreement Due to Lack of Preferences? Yes. Missing Preference: It's not acceptable for objects to have any signs of dirtiness.
|
2306.08651#70
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 71 |
x_(h,r) = [CLS] Text_h [SEP] Text_r [SEP],
x_t = [CLS] Text_t [SEP]. (10)
Then the two parts are encoded separately by LLMs, and the final hidden states of the [CLS] tokens are used as the representations of (h, r) and t, respectively. The representations are then fed into a scoring function to predict the plausibility of the triple, formulated as
s = f_score(e_(h,r), e_t), (11)
where f_score denotes a score function such as TransE.
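A minimal sketch of the separated-encoding recipe in Eqs. (10)-(11), assuming a generic bert-base-uncased encoder and cosine similarity as f_score (SimKGC's choice); StAR and the other methods use their own scoring modules, so this is illustrative only.

# Separated encoding (Eq. 10) and scoring (Eq. 11) with a generic BERT encoder.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def cls_embedding(text: str) -> torch.Tensor:
    """Encode a text and return the final hidden state of the [CLS] token."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = encoder(**inputs)
    return outputs.last_hidden_state[:, 0]          # shape (1, 768)

# x_(h,r) = [CLS] Text_h [SEP] Text_r [SEP]
e_hr = cls_embedding("Charlie's Angels: Full Throttle [SEP] genre of")
# x_t = [CLS] Text_t [SEP]
e_t = cls_embedding("Comedy")

# s = f_score(e_(h,r), e_t); here f_score is cosine similarity.
score = torch.nn.functional.cosine_similarity(e_hr, e_t).item()
print(f"plausibility score: {score:.4f}")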
StAR [143] applies Siamese-style textual encoders on their text, encoding them into separate contextualized representations. To avoid the combinatorial explosion of textual encoding approaches, e.g., KG-BERT, StAR employs a scoring module that involves both a deterministic classifier and spatial measurement for representation and structure learning, respectively, which also enhances structured knowledge by exploring the spatial characteristics. SimKGC [144] is
|
2306.08302#71
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 71 |
first video doesn't contain the required information. I'll now try the same action with the second video. Action: subtitle_ground(..., visual[1]) Observation: the target clip is from 18:02 - 18:44.
visual-3: a 42 seconds video, segmented video from visual-1, dense subtitle, target clip for query "Linear + Horizon Lock, Full 360 rotation".
Action: video_narration(visual[0]) Observation: 0-3: asian beef stir fry. Texts on the frame (Taste) 3-6: a chef is preparing a beef dish. Texts on the frame (Beef 350 g Taste) 6-9: a person is cutting a piece of meat. Texts on the frame (Taste) 9-12: a person is cutting a piece of meat. Texts on the frame (Taste)
Question 5: List all t
Thought: I need to find the ingredients and their weights in the video. Action: video_narration(visual[0]) Observation: 0-3: asian beef stir fry. Texts on the frame (Taste) ... 84-87: garlic. Texts on the frame (Garlic 15 g Taste) ... 132-135: 1 cup of broccoli
|
2306.08640#71
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08651
| 71 |
Oracle/LLM Answer: Detach accessories, put in sleep mode, and place tablet and accessories in a designated area. Annotator Majority Answer (Correct Label): Ensure the tablet in sleep mode is in a designated area. Disagreement Due to Lack of Preferences? Yes. Missing Preference: I prefer to keep my tablet accessories attached so I can continue charging them.
Oracle/LLM Answer: Wash and dry the cup with residue or stains. Annotator Majority Answer (Correct Label): Leave the empty, clean cup as is. Disagreement Due to Lack of Preferences? Yes. Missing Preference: Leave cups that don't appear visibly dirty.
Oracle/LLM Answer: Clean the controller with a soft cloth or cleaning solution, then place it in a designated area. Annotator Majority Answer (Correct Label): Leave the controller as is on the stand or designated area. Disagreement Due to Lack of Preferences? Yes. Missing Preference: It's acceptable to have some dust on objects.
Figure 12: Questions Used for Personalization Analysis (1/2). We display questions where both Oracle and Ours-LLM chose the same incorrect answer. We recruited a participant to indicate whether the Oracle could have incorrectly answered these questions due to a lack of preference information, and if so, what the preference would be.
|
2306.08651#71
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 72 |
Query Triple: (h, r, ?); Text Sequence: [CLS] Text_h [SEP] Text_r [SEP]. Panels: (a) Encoder-Decoder PaG, (b) Decoder-Only PaG.
Fig. 17. The general framework of adopting LLMs as decoders (PaG) for KG Completion. The En. and De. denote the encoder and decoder, respectively.
another instance of leveraging a Siamese textual encoder to encode textual representations. Following the encoding process, SimKGC applies contrastive learning techniques to these representations. This process involves computing the similarity between the encoded representations of a given triple and its positive and negative samples. In particular, the similarity between the encoded representation of the triple and the positive sample is maximized, while the similarity between the encoded representation of the triple and the negative sample is minimized. This enables SimKGC to learn a representation space that separates plausible and implausible triples. To avoid overfitting textual information, CSPromp-KG [186] employs parameter-efficient prompt learning for KGC.
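The contrastive objective can be sketched as an in-batch InfoNCE loss over the (h, r) and tail embeddings; this is a schematic of the idea rather than SimKGC's full setup (which additionally uses pre-batch and self negatives).

# Schematic in-batch contrastive loss over (h, r) / tail embeddings.
# Assumes row i of e_t is the positive tail for row i of e_hr and all other
# rows in the batch act as negatives.
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(e_hr: torch.Tensor,
                              e_t: torch.Tensor,
                              temperature: float = 0.05) -> torch.Tensor:
    e_hr = F.normalize(e_hr, dim=-1)
    e_t = F.normalize(e_t, dim=-1)
    logits = e_hr @ e_t.T / temperature          # (B, B) cosine similarities
    labels = torch.arange(e_hr.size(0))          # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# Toy usage with random embeddings (batch size 4, dimension 768).
loss = in_batch_contrastive_loss(torch.randn(4, 768), torch.randn(4, 768))
print(loss.item())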
|
2306.08302#72
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 72 |
Texts on the frame (Taste) ... 84-87: garlic. Texts on the frame (Garlic 15 g Taste) ... 132-135: 1 cup of broccoli florets. Texts on the frame (Taste) ...
Question 4: How to Cook Perfect Beef Stir Fry?
visual-2: an image, user provided image, main content is a close up of a person holding a smart watch.
Thought: The subtitle_ground tool found a clip from the second video that mentions "Linear + Horizon Lock, Full 360 rotation." This video segment should illustrate what these settings mean. Final Answer: The meaning of the setting "Linear + Horizon Lock, Full 360 rotation" is explained in the extracted video clip, visual[3].
visual-0: a 197.12 seconds video, no meaningful subtitle, user provided video, How to Cook Perfect Beef Stir Fry.
Thought: I have the narration of the video, now I can summarize the steps to cook perfect beef stir fry. Final Answer: To cook perfect beef stir fry, follow these steps: 1. Slice 350g of beef. 2. Cut 100g of carrots and 200g of broccoli. 3. Mince 15g of garlic. 4. Heat a wok with cold oil
|
2306.08640#72
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08651
| 72 |
Question 7: "lamp". Oracle/LLM Answer: Turn on the lamp if needed. Annotator Majority Answer (Correct Label): Set the turned-off lamp upright. Disagreement Due to Lack of Preferences? Yes. Missing Preference: It's better to keep the light off!
Question 8: "candle". Oracle/LLM Answer: Trim the burnt wick and place the used candle in a designated area. Annotator Majority Answer (Correct Label): Leave the clean, unlit candle as is and place it in a designated area. Disagreement Due to Lack of Preferences? Yes. Missing Preference: It doesn't matter whether the burnt part of the wick is excessively long because I can still light it.
Question 9: "papers" (omitted). Oracle/LLM Answer: Flatten crumpled papers or trim torn edges, organize, and store them in a designated area. Annotator Majority Answer (Correct Label): Gather, organize, and store the scattered papers in a designated area. Disagreement Due to Lack of Preferences? No, the Oracle/LLM answer is incorrect. Missing Preference:
Oracle/LLM Answer: Turn on the lamp if needed. Annotator Majority Answer (Correct Label): Set the turned-off lamp upright. Disagreement Due to Lack of Preferences? Yes. Missing Preference: It's better to keep the light off!
|
2306.08651#72
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 73 |
LP-BERT [145] is a hybrid KGC method that combines both MLM Encoding and Separated Encoding. This approach consists of two stages, namely pre-training and fine-tuning. During pre-training, the method utilizes the standard MLM mechanism to pre-train an LLM with KGC data. During the fine-tuning stage, the LLM encodes both parts and is optimized using a contrastive learning strategy (similar to SimKGC [144]).
# 5.2.2 LLM as Generators (PaG).
Recent works use LLMs as sequence-to-sequence generators in KGC. As presented in Fig. 17 (a) and (b), these approaches involve encoder-decoder or decoder-only LLMs. The LLMs receive a text sequence of the query triple (h, r, ?) as input and directly generate the text of the tail entity t.
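A hedged sketch of the PaG interface using an off-the-shelf T5 checkpoint; GenKGC and KGT5 fine-tune dedicated models and decode against an entity vocabulary, so the raw output below only illustrates the input/output format, not their actual quality.

# LLM-as-generator (PaG): verbalize the query triple (h, r, ?) and let a
# sequence-to-sequence model generate the tail entity text.
# t5-small is a stand-in; KGT5/GenKGC train their own checkpoints.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

query = "predict tail: head: Charlie's Angels: Full Throttle | relation: genre of"
inputs = tokenizer(query, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))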
|
2306.08302#73
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 73 |
fry, follow these steps: 2. Cut 100g of carrots and 200g of broccoli. 3. Mince 15g of garlic. 4. Heat a wok with cold oil to prevent sticking. 5. Cook the beef in the wok. 6. Add the vegetables and cook them. 7. Add 1 tsp of soy sauce and some water. 8. Mix water and cornstarch, then add it to the wok. 9. Add some lemon juice. 10. Serve the stir fry over cooked brown rice.
...the ingredients for the dish and their weights.
Thought: I have found the ingredients and their weights in the video narration. Final Answer: The ingredients and their weights for the beef stir fry are: Beef 350g, Soy Sauce 1 1/2 tsp, Carrot 100g, Broccoli 200g, Garlic 15g, and Cooked Brown Rice.
|
2306.08640#73
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08651
| 73 |
Oracle/LLM Answer: Trim the burnt wick and place the used candle in a designated area. Annotator Majority Answer (Correct Label): Leave the clean, unlit candle as is and place it in a designated area. Disagreement Due to Lack of Preferences? Yes. Missing Preference: It doesn't matter whether the burnt part of the wick is excessively long because I can still light it.
Oracle/LLM Answer: Flatten crumpled papers or trim torn edges, organize, and store them in a designated area. Annotator Majority Answer (Correct Label): Gather, organize, and store the scattered papers in a designated area. Disagreement Due to Lack of Preferences? No, the Oracle/LLM answer is incorrect. Missing Preference:
Figure 13: Questions Used for Personalization Analysis (2/2). We display questions where both Oracle and Ours-LLM chose the same incorrect answer. We recruited a participant to indicate whether the Oracle could have incorrectly answered these questions due to a lack of preference information, and if so, what the preference would be.
# F Prompts & In-Context Examples for LLM Inference
|
2306.08651#73
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 74 |
GenKGC [96] uses the large language model BART [5] as the backbone model. Inspired by the in-context learning approach used in GPT-3 [59], where the model concatenates relevant samples to learn correct output answers, GenKGC proposes a relation-guided demonstration technique that includes triples with the same relation to facilitate the model's learning process. In addition, during generation, an entity-aware hierarchical decoding method is proposed to reduce the time complexity. KGT5 [146] introduces a
(Charlie's Angels: Full Throttle) Given head entity and relation, predict the tail entity from the candidates: [100 candidates]. Head: Charlie's Angels; Relation: genre of; Tail: Comedy-GB.
Fig. 18. The framework of prompt-based PaG for KG Completion.
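The relation-guided demonstration idea can be sketched as simple prompt construction: prepend completed triples that share the query's relation before the incomplete triple, in the spirit of Fig. 18. The demonstration triples and the exact input format below are illustrative assumptions, not GenKGC's actual templates.

# Sketch of a relation-guided demonstration prompt in the spirit of GenKGC / Fig. 18.
def build_prompt(demos, head, relation):
    lines = ["Given head entity and relation, predict the tail entity:"]
    for h, r, t in demos:                         # demonstrations share the relation
        lines.append(f"Head: {h} | Relation: {r} | Tail: {t}")
    lines.append(f"Head: {head} | Relation: {relation} | Tail:")
    return "\n".join(lines)

demonstrations = [
    ("The Matrix", "genre of", "Science Fiction"),   # hypothetical demo triples
    ("Titanic", "genre of", "Romance"),
]
print(build_prompt(demonstrations, "Charlie's Angels: Full Throttle", "genre of"))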
|
2306.08302#74
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 74 |
# Figure 7: The reasoning process of AssistGPT when handling in-the-wild questions.
# References
[1] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
[2] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, page 9, 2019.
[3] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. NeurIPS, pages 1877â1901, 2020.
|
2306.08640#74
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08651
| 74 |
# F Prompts & In-Context Examples for LLM Inference
In this section, we provide the comprehensive set of prompts used to elicit the desired behavior from the LLM (via the OpenAI API) across the multiple functionalities described in our approach, from generating follow-up questions, to synthesizing code for real-robot execution.
# F.1 Prompt for Generating Follow-Up Questions
In the second step of our proposed framework (see Section 3 of the main paper), we one-shot prompt the LLM to generate follow-up questions about a list of objects on a surface using the prompt in Listing 3.
1 """ 2 These are the objects on the desk : 3 â scrunchie â , â lotion â , â vaseline â , â brush â. 4 5 Your goal is to tidy the desk in a socially appropriate manner . 6 Ask a new follow - up question about each object to gather 7 more information . Only ask questions that can be answered by 8 taking a picture of the object . For example , DO NOT ask whether 9 the object is currently being used . 10 """
Listing 3: Instruction For Generating Follow-Up Questions
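In practice, this instruction is sent to the LLM through the OpenAI API. The snippet below is only an illustrative sketch of such a call, not the authors' released code; it assumes the openai Python package (pre-1.0 ChatCompletion interface), and the model name and temperature are placeholder choices.

import openai  # assumes the openai Python package (pre-1.0 interface) and a configured API key

objects = ["scrunchie", "lotion", "vaseline", "brush"]
instruction = (
    "These are the objects on the desk:\n"
    + ", ".join(f"'{o}'" for o in objects) + ".\n\n"
    "Your goal is to tidy the desk in a socially appropriate manner.\n"
    "Ask a new follow-up question about each object to gather more information.\n"
    "Only ask questions that can be answered by taking a picture of the object.\n"
    "For example, DO NOT ask whether the object is currently being used."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",   # placeholder model choice; the text does not fix this here
    messages=[{"role": "user", "content": instruction}],
    temperature=0.0,
)
print(response["choices"][0]["message"]["content"])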
To guide follow-up question generation, we provide the following (Listing 4) as the sole in-context example before having the LLM generate a continuation:
|
2306.08651#74
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 75 |
novel KGC model that fulfils four key requirements of such models: scalability, quality, versatility, and simplicity. To address these objectives, the proposed model employs a straightforward T5 small architecture. The model is distinct from previous KGC methods in that it is randomly initialized rather than built on pre-trained models. KG-S2S [147] is a comprehensive framework that can be applied to various types of KGC tasks, including Static KGC, Temporal KGC, and Few-shot KGC. To achieve this objective, KG-S2S reformulates the standard triple KG fact by introducing an additional element, forming a quadruple (h, r, t, m), where m represents the additional "condition" element. Although different KGC tasks may refer to different conditions, they typically share a similar textual format, which enables unification across different KGC tasks. The KG-S2S approach incorporates various techniques such as entity description, soft prompt, and Seq2Seq Dropout to improve the model's performance. In addition, it utilizes constrained decoding to ensure the generated entities are valid. For closed-source
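As a rough illustration of the quadruple idea (an assumed input/output serialization for a Seq2Seq KGC model, not KG-S2S's actual format), a fact (h, r, t, m) can be turned into a text-to-text training pair as follows:

# Minimal sketch (assumption, not the KG-S2S implementation): serializing a
# quadruple (h, r, t, m) into a text-to-text pair for a Seq2Seq KGC model.
def serialize_quadruple(head, relation, tail, condition=""):
    """Return (source, target) strings; the condition m could be e.g. a timestamp
    for temporal KGC and is left empty for static KGC."""
    source = f"predict tail: head: {head} | relation: {relation}"
    if condition:
        source += f" | condition: {condition}"
    return source, tail

# Example usage for a temporal fact
src, tgt = serialize_quadruple("Barack Obama", "president of", "USA", condition="2010")
print(src)   # predict tail: head: Barack Obama | relation: president of | condition: 2010
print(tgt)   # USA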
|
2306.08302#75
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 75 |
[4] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
[5] OpenAI. Introducing chatgpt. OpenAI Blog, 09 2021.
[6] Andy Zeng, Adrian Wong, Stefan Welker, Krzysztof Choromanski, Federico Tombari, Aveek Purohit, Michael Ryoo, Vikas Sindhwani, Johnny Lee, Vincent Vanhoucke, et al. Socratic models: Composing zero-shot multimodal reasoning with language. arXiv preprint arXiv:2204.00598, 2022.
[7] Zhenhailong Wang, Manling Li, Ruochen Xu, Luowei Zhou, Jie Lei, Xudong Lin, Shuohang Wang, Ziyi Yang, Chenguang Zhu, Derek Hoiem, et al. Language models with image descriptors are strong few-shot video-language learners. arXiv preprint arXiv:2205.10747, 2022.
|
2306.08640#75
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08651
| 75 |
1 """ 2 These are the objects on the desk : 3 â apple â , â charging cable â , â empty water bottle â , â book â , â calendar â , â coffee cup â. 4 5 6 Your goal is to tidy the desk in a socially appropriate manner . 7 Ask a new follow - up question about each object to gather 8 more information . Only ask questions that can be answered by 9 taking a picture of the object . For example , DO NOT ask 10 whether the object is currently being used . 11 12 -â Apple â: 13 Socially motivated reasoning : You should throw away the 14 â apple â if it is partially eaten , but not if it is intact . 15 16 Resulting question ( that can be answered by taking a 17 picture of object ) : Is the â apple â partially eaten ? 18 19 ( a ) Yes ( b ) No ( c ) Cannot answer from image 20 21 -â Charging cable â: 22 Socially motivated reasoning : You should coil the 23 24 25 â charging cable â and store it neatly if it is not in use , but leave it in place if it is connected to a device that needs charging . 26 27 28 Resulting question ( that can be answered by
|
2306.08651#75
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 76 |
Dropout to improve the model's performance. In addition, it utilizes constrained decoding to ensure the generated entities are valid. For closed-source LLMs (e.g., ChatGPT and GPT-4), AutoKG adopts prompt engineering to design customized prompts [93]. As shown in Fig. 18, these prompts contain the task description, few-shot examples, and test input, which instruct LLMs to predict the tail entity for KG completion.
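A minimal sketch of such a prompt, with a generic task description, few-shot demonstrations, and a test input (the wording is illustrative only and does not reproduce the prompts in Fig. 18):

# Hedged sketch of an AutoKG-style prompt for tail-entity prediction.
def build_kgc_prompt(examples, test_head, test_relation):
    lines = ["Task: given a head entity and a relation, predict the tail entity.", ""]
    for h, r, t in examples:                 # few-shot demonstrations
        lines.append(f"Head: {h} | Relation: {r} | Tail: {t}")
    lines.append(f"Head: {test_head} | Relation: {test_relation} | Tail:")
    return "\n".join(lines)

prompt = build_kgc_prompt(
    examples=[("Paris", "capital of", "France"), ("Tokyo", "capital of", "Japan")],
    test_head="Canberra",
    test_relation="capital of",
)
print(prompt)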
|
2306.08302#76
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 76 |
[8] Chenfei Wu, Shengming Yin, Weizhen Qi, Xiaodong Wang, Zecheng Tang, and Nan Duan. Visual chatgpt: Talking, drawing and editing with visual foundation models. arXiv preprint arXiv:2303.04671, 2023.
[9] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In ICML, pages 12888–12900, 2022.
[10] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597, 2023.
[11] Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructblip: Towards general-purpose vision-language models with instruction tuning. arXiv preprint arXiv:2305.06500, 2023.
|
2306.08640#76
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08302
| 77 |
Comparison between PaE and PaG. LLMs as Encoders (PaE) applies an additional prediction head on top of the representation encoded by LLMs. Therefore, the PaE framework is much easier to finetune, since we can optimize only the prediction heads while freezing the LLMs. Moreover, the output of the prediction can be easily specified and integrated with existing KGC functions for different KGC tasks. However, during the inference stage, PaE requires computing a score for every candidate in the KG, which can be computationally expensive. Besides, such methods cannot generalize to unseen entities. Furthermore, PaE requires the representation output of the LLMs, whereas some state-of-the-art LLMs (e.g., GPT-4) are closed-source and do not grant access to the representation output.
LLMs as Generators (PaG), on the other hand, does not need a prediction head and can be used without finetuning or access to representations. Therefore, the PaG framework is suitable for all kinds of LLMs. In addition, PaG directly generates the tail entity, making it efficient
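The contrast can be made concrete with a toy sketch (author-independent, with dummy stand-ins for the model calls): PaE scores every candidate tail with a prediction head, whereas PaG generates the tail string directly.

def pae_predict(score_fn, head, relation, candidate_tails):
    # score_fn stands in for a prediction head over LLM encodings of "(h, r, t)" text
    scores = {t: score_fn(f"{head} [SEP] {relation} [SEP] {t}") for t in candidate_tails}
    return max(scores, key=scores.get)      # one forward pass per candidate

def pag_predict(generate_fn, head, relation):
    # generate_fn stands in for any text generator (seq2seq or decoder-only LLM)
    return generate_fn(f"Predict the tail entity. Head: {head}. Relation: {relation}. Tail:")

# Dummy stand-ins so the sketch runs end to end
toy_scores = {"France": 0.9, "Germany": 0.2}
print(pae_predict(lambda text: toy_scores.get(text.split("[SEP]")[-1].strip(), 0.0),
                  "Paris", "capital of", ["France", "Germany"]))   # France
print(pag_predict(lambda prompt: "France", "Paris", "capital of"))  # France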
|
2306.08302#77
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 77 |
[12] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In ECCV, pages 213–229, 2020.
[13] Liunian Harold Li, Pengchuan Zhang, Haotian Zhang, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, Kai-Wei Chang, and Jianfeng Gao. Grounded language-image pre-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10965–10975, June 2022.
[14] Jialian Wu, Jianfeng Wang, Zhengyuan Yang, Zhe Gan, Zicheng Liu, Junsong Yuan, and Lijuan Wang. Grit: A generative region-to-text transformer for object understanding. arXiv preprint arXiv:2212.00280, 2022.
|
2306.08640#77
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08651
| 77 |
Listing 4: In-Context Example For Generating Follow-Up Questions
Notably, we use Chain-of-Thought prompting to encourage the LLM to generate questions that are motivated by social reasoning. We also encourage the LLM to ask questions that can be answered by an image of the object.
# Prompt for Generating Baseline Follow-Up Questions.
To generate baseline questions, we use the following prompt (Listing 5):
1 """ 2 Ask one yes - or - no question for each object on the desk . Only ask 3 yes - or - no questions that can be answered by taking a picture of the object . 4 5 These are the objects on the desk : 6 â scrunchie â , â lotion â , â vaseline â , â brush â. 7 8 Format your answer in the following format : â object_name â: question 9 """
# Listing 5: Instruction For Generating Baseline Follow-Up Questions
In our baseline question prompt, we do not specify that the goal for the LLM is to tidy the desk nor do we require the LLM to generate socially motivated questions.
# F.2 Prompt for Choosing a Close-Up Angle
|
2306.08651#77
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 78 |
in inference without ranking all the candidates, and it generalizes easily to unseen entities. However, the challenge of PaG is that the generated entities could be diverse and may not lie in the KG. What is more, a single inference takes longer due to auto-regressive generation. Last, how to design a powerful prompt that feeds KGs into LLMs is still an open question. Consequently, while PaG has demonstrated promising results for KGC tasks, the trade-off between model complexity and computational efficiency must be carefully considered when selecting an appropriate LLM-based KGC framework.
# 5.2.3 Model Analysis
Justin et al. [187] provide a comprehensive analysis of KGC methods integrated with LLMs. Their research investigates the quality of LLM embeddings and finds that they are suboptimal for effective entity ranking. In response, they propose several techniques for processing embeddings to improve their suitability for candidate retrieval. The study also compares different model selection dimensions, such as Embedding Extraction, Query Entity Extraction, and Language Model Selection. Lastly, the authors propose a framework that effectively adapts LLMs for knowledge graph completion.
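For intuition, candidate retrieval with LLM embeddings can be framed as a simple similarity ranking; the sketch below (not taken from [187]) ranks candidate entities by cosine similarity to a query embedding, which is the kind of pipeline whose quality such a study examines.

import numpy as np

def rank_candidates(query_emb, candidate_embs):
    """query_emb: (d,) vector; candidate_embs: dict name -> (d,) vector,
    both assumed to come from some LLM embedding endpoint."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return sorted(candidate_embs,
                  key=lambda name: cosine(query_emb, candidate_embs[name]),
                  reverse=True)

q = np.array([0.1, 0.9, 0.0])
cands = {"France": np.array([0.0, 1.0, 0.0]), "Germany": np.array([1.0, 0.0, 0.0])}
print(rank_candidates(q, cands))   # ['France', 'Germany']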
# 5.3 LLM-augmented KG Construction
|
2306.08302#78
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 78 |
[15] Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, and Lijuan Wang. Git: A generative image-to-text transformer for vision and language. arXiv preprint arXiv:2205.14100, 2022.
[16] Xi Chen, Xiao Wang, Soravit Changpinyo, AJ Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, Alexander Kolesnikov, Joan Puigcerver, Nan Ding, Keran Rong, Hassan Akbari, Gaurav Mishra, Linting Xue, Ashish Thapliyal, James Bradbury, Weicheng Kuo, Mojtaba Seyedhosseini, Chao Jia, Burcu Karagol Ayan, Carlos Riquelme, Andreas Steiner, Anelia Angelova, Xiaohua Zhai, Neil Houlsby, and Radu Soricut. Pali: A jointly-scaled multilingual language-image model, 2022.
|
2306.08640#78
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08651
| 78 |
# F.2 Prompt for Choosing a Close-Up Angle
In the third step of our proposed framework, we few-shot prompt the LLM to generate informative close-up angles that would guide a robot. In the prompt, we include a list of objects on the current surface, the follow-up question about an object, and a multiple choice question with options corresponding to the five predefined close-up angles: <FRONT>, <BACK>, <LEFT>, <RIGHT>, <TOP>. We use the following prompt (Listing 6):
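The LLM's multiple-choice answer then has to be mapped back onto one of the five angle tokens before the robot can act. Below is a hedged sketch of that parsing step; the letter-to-token mapping mirrors the options in Listing 6, and the decoding logic is an illustrative assumption rather than the authors' implementation.

import re

ANGLE_BY_LETTER = {"a": "<TOP>", "b": "<RIGHT>", "c": "<LEFT>", "d": "<FRONT>", "e": "<BACK>"}

def parse_angle(llm_answer: str, default: str = "<FRONT>") -> str:
    # pick the first "(a)"–"(e)" option mentioned in the answer, else fall back
    match = re.search(r"\(([a-e])\)", llm_answer.lower())
    return ANGLE_BY_LETTER[match.group(1)] if match else default

print(parse_angle("A top-down view would be best.\n\n(a) Top of the object"))  # <TOP>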
|
2306.08651#78
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 79 |
# 5.3 LLM-augmented KG Construction
Knowledge graph construction involves creating a struc- tured representation of knowledge within a specific domain. This includes identifying entities and their relationships with each other. The process of knowledge graph construc- tion typically involves multiple stages, including 1) entity discovery, 2) coreference resolution, and 3) relation extraction. Fig 19 presents the general framework of applying LLMs for each stage in KG construction. More recent approaches have explored 4) end-to-end knowledge graph construction, which involves constructing a complete knowledge graph in one step or directly 5) distilling knowledge graphs from LLMs.
# 5.3.1 Entity Discovery
Entity discovery in KG construction refers to the process of identifying and extracting entities from unstructured data sources, such as text documents, web pages, or social media posts, and incorporating them to construct knowledge graphs.
|
2306.08302#79
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 79 |
[17] Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv preprint arXiv:2303.17580, 2023.
[18] Yingqiang Ge, Wenyue Hua, Jianchao Ji, Juntao Tan, Shuyuan Xu, and Yongfeng Zhang. Openagi: When llm meets domain experts. arXiv preprint arXiv:2304.04370, 2023.
[19] Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, and Jianfeng Gao. Chameleon: Plug-and-play compositional reasoning with large language models. arXiv preprint arXiv:2304.09842, 2023.
[20] KunChang Li, Yinan He, Yi Wang, Yizhuo Li, Wenhai Wang, Ping Luo, Yali Wang, Limin Wang, and Yu Qiao. Videochat: Chat-centric video understanding. arXiv preprint arXiv:2305.06355, 2023.
|
2306.08640#79
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08651
| 79 |
1 """ 2 Description : These are the objects on the desk : 3 â computer monitor â , âcup â , â computer wires â , â apple â. 4 5 Follow - up Question : Are the â computer wires â connected to anything ? 6 ( a ) Yes ( b ) No 7 8 You are instructing a robot to take a close - up picture of the object 9 to help answer the follow - up question . 10 11 Which of the following angles would yield a close - up picture that can 12 best answer the question ? 13 14 ( a ) Top of the object 15 ( b ) Right side of the object 16 ( c ) Left side of the object 17 ( d ) Front of the object 18 ( e ) Behind the object 19 20 Response : A top - down view would give an unoccluded view since the 21 wires might be tangled . 22 23 ( a ) Top of the object 24 25 Description : These are the objects on the desk : â monitor â , â stack of papers â , â cups â. 26 27 28 Follow - up Question : Are the â cups â empty ? 29 ( a ) Yes ( b ) No 30 31 You are instructing a robot to take a close - up picture of the object 32 to help answer the follow - up question . 33 34 Which of the
|
2306.08651#79
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 80 |
Named Entity Recognition (NER) involves identifying and tagging named entities in text data with their positions and classifications. The named entities include people, organizations, locations, and other types of entities. The state-of-the-art NER methods usually employ LLMs to leverage their contextual understanding and linguistic knowledge for accurate entity recognition and classification. There are three NER sub-tasks based on the types of NER spans identified, i.e., flat NER, nested NER, and discontinuous NER. 1) Flat NER is to identify non-overlapping named entities from input text. It is usually conceptualized as a sequence labelling problem where each token in the text is assigned a unique label based on its position in the sequence [1], [148], [188], [189]. 2) Nested NER considers complex scenarios which allow a token to belong to multiple entities. The span-based method [190]–[194] is a popular branch of nested
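To make the sequence-labelling framing of flat NER concrete, a generic BIO-tagged example and a span decoder are sketched below (illustrative only, not tied to any specific method cited above).

tokens = ["Barack", "Obama", "visited", "Paris", "."]
bio_tags = ["B-PER", "I-PER", "O", "B-LOC", "O"]   # one label per token, no overlapping spans

def decode_entities(tokens, tags):
    """Collect (entity_text, entity_type) spans from BIO-tagged tokens."""
    entities, current, etype = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                entities.append((" ".join(current), etype))
            current, etype = [tok], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(tok)
        else:
            if current:
                entities.append((" ".join(current), etype))
            current, etype = [], None
    if current:
        entities.append((" ".join(current), etype))
    return entities

print(decode_entities(tokens, bio_tags))   # [('Barack Obama', 'PER'), ('Paris', 'LOC')]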
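To make the sequence-labelling view of flat NER concrete, here is a minimal, self-contained Python sketch (illustrative only, not one of the cited systems) that converts entity spans into the per-token BIO labels such taggers typically predict:

```python
# Minimal sketch (illustrative, not from the cited systems): flat NER viewed as
# per-token sequence labelling with BIO tags.
def spans_to_bio(tokens, spans):
    """spans: list of (start, end_exclusive, type) over non-overlapping token spans."""
    labels = ["O"] * len(tokens)
    for start, end, etype in spans:
        labels[start] = f"B-{etype}"          # first token of the entity
        for i in range(start + 1, end):
            labels[i] = f"I-{etype}"          # continuation tokens
    return labels

tokens = ["Joe", "Biden", "was", "born", "in", "Pennsylvania", "."]
print(spans_to_bio(tokens, [(0, 2, "PER"), (5, 6, "LOC")]))
# -> ['B-PER', 'I-PER', 'O', 'O', 'O', 'B-LOC', 'O']
```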
|
2306.08302#80
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 80 |
[21] Dídac Surís, Sachit Menon, and Carl Vondrick. Vipergpt: Visual inference via python execution for reasoning. arXiv preprint arXiv:2303.08128, 2023.
[22] Tanmay Gupta and Aniruddha Kembhavi. Visual programming: Compositional visual reasoning without training. arXiv preprint arXiv:2211.11559, 2022.
[23] Yaobo Liang, Chenfei Wu, Ting Song, Wenshan Wu, Yan Xia, Yu Liu, Yang Ou, Shuai Lu, Lei Ji, Shaoguang Mao, Yun Wang, Linjun Shou, Ming Gong, and Nan Duan. Taskmatrix.ai: Completing tasks by connecting foundation models with millions of apis. arXiv preprint arXiv:2303.16434, 2023.
[24] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629, 2022.
|
2306.08640#80
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08302
| 81 |
[Figure: an example sentence ("Joe Biden was born in Pennsylvania. He serves as the 46th President of the United States.") is processed via Named Entity Recognition, Coreference Resolution, and Relation Extraction into a knowledge graph.]
Fig. 19. The general framework of LLM-based KG construction.
NER which involves enumerating all candidate spans and classifying them into entity types (including a non-entity type). Parsing-based methods [195]-[197] reveal similarities between nested NER and constituency parsing tasks (predicting nested and non-overlapping spans), and propose to integrate the insights of constituency parsing into nested NER. 3) Discontinuous NER identifies named entities that may not be contiguous in the text. To address this challenge, [198] uses the LLM output to identify entity fragments and determine whether they are overlapped or in succession.
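As a rough illustration of the span-based recipe described above (an assumption-level sketch, not one of the cited models), nested NER enumerates all candidate spans up to a width limit and classifies each one, which allows overlapping entities to be recovered:

```python
# Illustrative sketch: enumerate all candidate spans up to max_width; a classifier
# would then assign each span an entity type or a "non-entity" label, so nested
# (overlapping) entities can coexist.
def enumerate_spans(tokens, max_width=3):
    spans = []
    for start in range(len(tokens)):
        for width in range(1, max_width + 1):
            end = start + width
            if end <= len(tokens):
                spans.append((start, end))
    return spans

print(enumerate_spans(["the", "United", "States", "President"], max_width=2))
```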
|
2306.08302#81
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 81 |
[25] Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023.
[26] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. Vqa: Visual question answering. In Proceedings of the IEEE international conference on computer vision, pages 2425-2433, 2015.
[27] Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6077-6086, 2018.
[28] Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. Advances in neural information processing systems, 32, 2019.
|
2306.08640#81
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08651
| 81 |
(b) Right side of the object
(c) Left side of the object
(d) Front of the object
(e) Behind the object

Response: The cups might be opaque so the best angle would be

(a) Top of the object

Description: These are the objects on the desk:
'keyboard', 'whiteboard marker', 'stack of papers', 'vase'.

Follow-up Question: Are the 'stack of papers' straightened?
(a) Yes (b) No

You are instructing a robot to take a close-up picture of the object
to help answer the follow-up question.

Which of the following angles would yield a close-up picture that can
best answer the question?

(a) Top of the object
(b) Right side of the object
(c) Left side of the object
(d) Front of the object
(e) Behind the object

Response: The stack would best be viewed from its side.

(d) Front of the object
"""
Listing 6: Prompt for Generating Informative Close-Up Angles
# F.3 Prompt for Choosing an Action Plan
|
2306.08651#81
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08640
| 82 |
[29] Hao Tan and Mohit Bansal. Lxmert: Learning cross-modality encoder representations from transformers. arXiv preprint arXiv:1908.07490, 2019.
[30] Difei Gao, Ruiping Wang, Shiguang Shan, and Xilin Chen. Learning to recognize visual concepts for visual question answering with structural label space. IEEE Journal of Selected Topics in Signal Processing, 14(3):494-505, 2020.
[31] Difei Gao, Ruiping Wang, Shiguang Shan, and Xilin Chen. Visual textbook network: Watch carefully before answering visual questions. In BMVC, 2017.
[32] Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. Ok-vqa: A visual question answering benchmark requiring external knowledge. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3195-3204, 2019.
[33] Dustin Schwenk, Apoorv Khandelwal, Christopher Clark, Kenneth Marino, and Roozbeh Mottaghi. A-okvqa: A benchmark for visual question answering using world knowledge. arXiv, 2022.
|
2306.08640#82
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08651
| 82 |
Listing 6: Prompt for Generating Informative Close-Up Angles
# F.3 Prompt for Choosing an Action Plan
As the ultimate step of our framework, we prompt the LLM to answer our benchmark questions by choosing the most socially appropriate action to tidy each object. When prompting the LLM, we first include the context accumulated so far: the follow-up questions and their VLM-generated answers (see Listing 7 for an example).
1 """ 2 Here is some information about the â scrunchie â in 3 question - answer format . 4 5 Is the â scrunchie â neatly placed on the desk ? Yes 6 Does the â scrunchie â have any stains ? Yes 7 Does the â scrunchie â have any loose threads ? No 8 """
# Listing 7: Example of Context for Action Plan Generation
We append the benchmark question to the prompt and have the LLM generate an appropriate tidying action:
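The exact prompt appears in Listing 8 of the paper; the following hypothetical Python helper only sketches how such a prompt could be assembled from the accumulated question-answer context and the multiple-choice options:

```python
# Hypothetical sketch (assumed helper, not the authors' code): build the
# action-selection prompt from the gathered follow-up Q/A pairs and options.
def build_action_prompt(obj, qa_pairs, options):
    lines = [f"Here is some information about the '{obj}' in question-answer format.", ""]
    lines += [f"{question} {answer}" for question, answer in qa_pairs]
    lines += ["", f"Based on the information above, what is the most appropriate way to tidy the '{obj}'?"]
    lines += ["", "Choose the best option."]
    lines += [f"({chr(ord('a') + i)}) {option}" for i, option in enumerate(options)]
    lines += ["", "The best option is:"]
    return "\n".join(lines)

print(build_action_prompt("scrunchie",
                          [("Does the 'scrunchie' have any stains?", "Yes")],
                          ["Leave as is.", "Clean, dry, and place in a designated area."]))
```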
|
2306.08651#82
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 83 |
Entity Typing (ET) aims to provide fine-grained and ultra-grained type information for a given entity mentioned in context. These methods usually utilize LLM to encode mentions, context and types. LDET [150] applies pre-trained ELMo embeddings [148] for word representation and adopts LSTM as its sentence and mention encoders. BOX4Types [151] recognizes the importance of type dependency and uses BERT to represent the hidden vector and each type in a hyperrectangular (box) space. LRN [199] considers extrinsic and intrinsic dependencies between labels. It encodes the context and entity with BERT and employs these output embeddings to conduct deductive and inductive reasoning. MLMET [200] uses predefined patterns to construct input samples for the BERT MLM and employs [MASK] to predict context-dependent hypernyms of the mention, which can be viewed as type labels. PL [201] and DFET [202] utilize prompt learning for entity typing. LITE [203] formulates entity typing as textual inference and uses RoBERTa-large-MNLI as the backbone network.
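As a toy illustration of the cloze-style idea behind MLM-based typing (an assumption-level sketch, not the exact MLMET patterns), the mention is embedded in a [MASK] template whose predicted filler serves as a type label:

```python
# Illustrative cloze pattern for entity typing: a masked-language model's
# prediction for [MASK] (e.g., "politician", "person") acts as the type label.
def typing_cloze(sentence: str, mention: str) -> str:
    return f"{sentence} In this context, {mention} is a [MASK]."

print(typing_cloze("Joe Biden was born in Pennsylvania.", "Joe Biden"))
```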
|
2306.08302#83
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 83 |
[34] Peng Wang, Qi Wu, Chunhua Shen, Anthony Dick, and Anton Van Den Hengel. Fvqa: Fact-based visual question answering. IEEE transactions on pattern analysis and machine intelligence, 40(10):2413-2427, 2017.
[35] Liangke Gui, Borui Wang, Qiuyuan Huang, Alex Hauptmann, Yonatan Bisk, and Jianfeng Gao. Kat: A knowledge augmented transformer for vision-and-language. arXiv preprint arXiv:2112.08614, 2021.
[36] Kenneth Marino, Xinlei Chen, Devi Parikh, Abhinav Gupta, and Marcus Rohrbach. Krisp: Integrating implicit and symbolic knowledge for open-domain knowledge-based vqa. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14111-14121, 2021.
[37] Difei Gao, Ruiping Wang, Shiguang Shan, and Xilin Chen. Cric: A vqa dataset for compositional reasoning on vision and commonsense. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.
|
2306.08640#83
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08651
| 83 |
# Listing 7: Example of Context for Action Plan Generation
We append the benchmark question to the prompt and have the LLM generate an appropriate tidying action:
1 """ 2 Based on the information above , what is the most appropriate 3 way to tidy the â scrunchie â? 4 5 Choose the best option . 6 ( a ) The scrunchie is neatly coiled and placed on the desk . 7 8 ( b ) The scrunchie is stretched out and tangled with other 9 -> Leave the neatly coiled scrunchie as is in a designated area . items on the desk . -> Untangle , coil neatly , and place in a designated area . 10 11 ( c ) The scrunchie is dirty or stained and needs to be cleaned .
20
-> Clean , dry , and place in a designated area . 12 13 ( d ) The scrunchie is partially unraveled or damaged . 14 15 ( e ) The scrunchie is being used to hold together a bundle 16 -> Repair or replace , and place in a designated area . of cables or cords on the desk . -> Remove from cables , coil neatly , and place in a designated area . 17 18 19 The best option is : 20 """
Listing 8: Prompt For Generating Answers to Benchmark Questions
# F.4 Prompt for Generating MESSYSURFACES Benchmark Questions
|
2306.08651#83
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 84 |
Entity Linking (EL), also known as entity disambiguation, involves linking entity mentions appearing in the text to their corresponding entities in a knowledge graph. [204] proposed BERT-based end-to-end EL systems that jointly discover and link entities. ELQ [152] employs a fast bi-encoder architecture to jointly perform mention detection and linking in one pass for downstream question answering systems. Unlike previous models that frame EL as matching in vector space, GENRE [205] formulates it as a sequence-to-sequence problem, autoregressively generating a version of the input markup-annotated with the unique identifiers of an entity expressed in natural language. GENRE is extended to its multilingual version mGENRE [206]. Considering the efficiency challenges of generative EL approaches, [207] parallelizes autoregressive linking across all potential mentions and relies on a shallow and efficient decoder. ReFinED [153] proposes an efficient zero-shot-capable EL approach by taking advantage of fine-grained entity types and entity descriptions which are processed by a LLM-based encoder.
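To make the vector-space view of EL concrete, here is a minimal sketch (assuming mention and entity embeddings have already been produced by some encoder) of bi-encoder style linking by inner-product scoring:

```python
# Minimal sketch: a bi-encoder links a mention by scoring it against candidate
# entity embeddings with a dot product and picking the argmax.
import numpy as np

def link_mention(mention_vec, entity_vecs, entity_ids):
    scores = entity_vecs @ mention_vec            # one score per candidate entity
    return entity_ids[int(np.argmax(scores))]

rng = np.random.default_rng(0)                    # stand-in for real encoder outputs
mention = rng.normal(size=8)
candidates = rng.normal(size=(3, 8))
print(link_mention(mention, candidates, ["Q6279", "Q1400", "Q30"]))
```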
# 5.3.2 Coreference Resolution (CR)
Coreference resolution is to find all expressions (i.e., mentions) that refer to the same entity or event in a text.
|
2306.08302#84
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 84 |
[38] Max Bain, Arsha Nagrani, Gül Varol, and Andrew Zisserman. Frozen in time: A joint video and image encoder for end-to-end retrieval. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1728-1738, 2021.
[39] Jie Lei, Linjie Li, Luowei Zhou, Zhe Gan, Tamara L. Berg, Mohit Bansal, and Jingjing Liu. Less is more: Clipbert for video-and-language learning via sparse sampling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 7331-7341, June 2021.
[40] Yuxuan Wang, Difei Gao, Licheng Yu, Weixian Lei, Matt Feiszli, and Mike Zheng Shou. Geb+: A benchmark for generic event boundary captioning, grounding and retrieval. In Computer Vision - ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXXV, pages 709-725. Springer, 2022.
|
2306.08640#84
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08651
| 84 |
Listing 8: Prompt For Generating Answers to Benchmark Questions
# F.4 Prompt for Generating MESSYSURFACES Benchmark Questions
As described in Section 3 of the main paper, we prompt an LLM to generate multiple choice options for the question "What is the most appropriate way to tidy the object?" for each object in the MESSYSURFACES dataset. To generate each set of multiple choice options, we first prompt the LLM to list five possible states each object could be in:
1 """ 2 These are the objects on the desk : 3 â scrunchie â , â lotion â , â vaseline â , â brush â. 4 5 Your goal is to tidy each â object â up , but there is not 6 enough information about each object . For each â object â , 7 list 5 possible states the object could be in that would 8 affect how you tidy it up . 9 10 Label the 5 states ( a ) -( e ) . Make sure each state is 11 significantly different from each other . Remember that 12 all the objects are placed on the desk . 13 """
# Listing 9: Example Prompt For Generating Benchmark Questions (1/2)
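A hypothetical driver for this two-step generation (the `query_llm` callable and its wiring are assumptions; the real prompts are Listings 9 and 10) might look like:

```python
# Sketch of the two-step flow: first elicit five possible object states,
# then ask for a tidying action per state, yielding diverse answer choices.
def generate_answer_choices(query_llm, objects, obj):
    state_prompt = (
        f"These are the objects on the desk: {', '.join(objects)}.\n"
        f"For the '{obj}', list 5 possible states (a)-(e) that would affect how you tidy it up."
    )
    states = query_llm(state_prompt)
    action_prompt = (
        f"{states}\n"
        f"For each state (a)-(e), tell me how you would tidy the '{obj}'. "
        "Include an option to 'leave the object as is'."
    )
    return query_llm(action_prompt)
```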
|
2306.08651#84
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 85 |
Coreference resolution is to find all expressions (i.e., mentions) that refer to the same entity or event in a text.
Within-document CR refers to the CR sub-task where all these mentions are in a single document. Mandar et al. [154] initialize LLM-based coreference resolution by replacing the previous LSTM encoder [208] with BERT. This work is followed by the introduction of SpanBERT [155] which is pre-trained on BERT architecture with a span-based masked language model (MLM). Inspired by these works, Tuan Manh et al. [209] present a strong baseline by incorporating the SpanBERT encoder into a non-LLM approach e2e-coref [208]. CorefBERT leverages the Mention Reference Prediction (MRP) task, which masks one or several mentions and requires the model to predict the masked mention's corresponding referents. CorefQA [210] formulates coreference resolution as a question answering task, where contextual queries are generated for each candidate mention and the coreferent spans are extracted from the document using the queries. Tuan Manh et al. [211] introduce a gating mechanism and a noisy training method to extract information from event mentions using the SpanBERT encoder.
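As an assumption-level sketch of the CorefQA-style reformulation (not the authors' implementation), each candidate mention is turned into a contextual query and coreferent spans are read off as extractive answers:

```python
# Illustrative sketch: mark a candidate mention in the document and phrase
# coreference resolution as an extractive question-answering query.
def build_coref_query(document: str, start: int, end: int) -> str:
    marked = document[:start] + "<mention>" + document[start:end] + "</mention>" + document[end:]
    return "Which spans in the text refer to the same entity as the marked mention? " + marked

doc = "Joe Biden was born in Pennsylvania. He serves as the 46th President."
start = doc.index("He ")
print(build_coref_query(doc, start, start + 2))
```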
|
2306.08302#85
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 85 |
[41] Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, et al. Ego4d: Around the world in 3,000 hours of egocentric video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18995-19012, 2022.
[42] Kevin Qinghong Lin, Jinpeng Wang, Mattia Soldan, Michael Wray, Rui Yan, Eric Z XU, Difei Gao, Rong-Cheng Tu, Wenzhe Zhao, Weijie Kong, et al. Egocentric video-language pretraining. Advances in Neural Information Processing Systems, 35:7575-7586, 2022.
[43] Difei Gao, Ruiping Wang, Ziyi Bai, and Xilin Chen. Env-qa: A video question answering benchmark for comprehensive understanding of dynamic environments. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1675-1685, 2021.
|
2306.08640#85
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08651
| 85 |
# Listing 9: Example Prompt For Generating Benchmark Questions (1/2)
After receiving the LLM's response, we ask it to generate a cleaning action for each state. The purpose of first asking it to generate object states is so that the LLM can generate diverse cleaning actions:
"""
For each state (a)-(e), tell me how you would tidy the 'object'.
Make sure each answer choice is significantly different from each
other. Include an option to 'leave the object as is'.
Each object should be in apostrophes like so: 'object'.
"""
Listing 10: Example Prompt For Generating Benchmark Questions (2/2)
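A minimal sketch of how this two-stage prompting could be wired together; the query_llm helper, the model wiring, and the exact wording of the first-stage prompt are illustrative assumptions rather than the paper's released code:

def generate_benchmark_options(obj: str, query_llm) -> str:
    # Stage 1: ask for candidate states of the object (wording assumed, not verbatim).
    states_prompt = f"List five possible states (a)-(e) that the '{obj}' could be in."
    states = query_llm(states_prompt)

    # Stage 2: ask for a distinct tidying action per state, mirroring Listing 10.
    actions_prompt = (
        states
        + f"\nFor each state (a)-(e), tell me how you would tidy the '{obj}'. "
        "Make sure each answer choice is significantly different from each other. "
        "Include an option to 'leave the object as is'."
    )
    return query_llm(actions_prompt)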
# F.5 Prompt for Real-Robot Code Generation from Language
Following Appendix C, we use the LLM to generate valid API calls for a given natural language action (e.g., "dispose of the coffee cup"). To do this, we use the following instruction prompt for GPT-3 that defines the interface formally:
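As a rough illustration of this step, the sketch below prompts a completion model with the interface definition and a natural-language action and returns the generated API calls; the interface_prompt argument, the complete helper, and the example output are assumptions for illustration, not the paper's exact prompt or outputs:

def action_to_api_calls(action: str, interface_prompt: str, complete) -> str:
    # interface_prompt: the formal interface definition plus API docs described above.
    # complete: an assumed wrapper around a GPT-3-style completion endpoint.
    prompt = (
        interface_prompt
        + f'\nNatural language action: "{action}"\n'
        + "Robot API calls:\n"
    )
    return complete(prompt)

# For example, action_to_api_calls('dispose of the coffee cup', ...) might yield
# something like:
#   robot.set_designated("trash can")
#   robot.cleanup("coffee cup")
#   robot.done()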
|
2306.08651#85
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 86 |
In order to reduce the large memory footprint faced by large LLM-based NER models, Yuval et al. [212] and Raghuveer et al. [213] proposed start-to-end and approximation models, respectively, both utilizing bilinear functions to calculate mention and antecedent scores with reduced reliance on span-level representations.
Cross-document CR refers to the sub-task where mentions that refer to the same entity or event may be spread across multiple documents. CDML [156] proposes a cross-document language modeling method which pre-trains a Longformer [214] encoder on concatenated related documents and employs an MLP for binary classification to determine whether a pair of mentions is coreferent or not. CrossCR [157] utilizes an end-to-end model for cross-document coreference resolution which pre-trains the mention scorer on gold mention spans and uses a pairwise scorer to compare mentions with all spans across all documents. CR-RL [158] proposes an actor-critic deep reinforcement learning-based coreference resolver for cross-document CR.
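A minimal PyTorch sketch of the bilinear mention-antecedent scoring used by such models; the hidden size and the use of nn.Bilinear are illustrative assumptions rather than the exact published architectures:

import torch
import torch.nn as nn

class BilinearAntecedentScorer(nn.Module):
    """Scores (mention, antecedent) pairs with a bilinear form s(m, a) = m^T W a + b,
    operating directly on token-level representations rather than span-level ones."""
    def __init__(self, hidden_dim: int = 768):
        super().__init__()
        self.bilinear = nn.Bilinear(hidden_dim, hidden_dim, 1)

    def forward(self, mentions: torch.Tensor, antecedents: torch.Tensor) -> torch.Tensor:
        # mentions, antecedents: (num_pairs, hidden_dim)
        return self.bilinear(mentions, antecedents).squeeze(-1)  # (num_pairs,)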
# 5.3.3 Relation Extraction (RE)
Relation extraction involves identifying semantic relationships between entities mentioned in natural language text. There are two types of relation extraction methods, i.e.
|
2306.08302#86
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 86 |
[44] Zhijian Hou, Wanjun Zhong, Lei Ji, Difei Gao, Kun Yan, Wing-Kwong Chan, Chong-Wah Ngo, Zheng Shou, and Nan Duan. Cone: An efficient coarse-to-fine alignment framework for long video temporal grounding. arXiv preprint arXiv:2209.10918, 2022.
[45] Benita Wong, Joya Chen, You Wu, Stan Weixian Lei, Dongxing Mao, Difei Gao, and Mike Zheng Shou. Assistq: Affordance-centric question-driven task completion for egocentric assistant. In European Conference on Computer Vision, pages 485–501. Springer, 2022.
[46] Weixian Lei, Difei Gao, Yuxuan Wang, Dongxing Mao, Zihan Liang, Lingmin Ran, and Mike Zheng Shou. Assistsr: Task-oriented video segment retrieval for personal ai assistant. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 319–338, 2022.
|
2306.08640#86
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08302
| 87 |
Relation extraction involves identifying semantic relationships between entities mentioned in natural language text. There are two types of relation extraction methods, i.e.
sentence-level RE and document-level RE, according to the scope of the text analyzed.
Sentence-level RE focuses on identifying relations between entities within a single sentence. Peng et al. [159] and TRE [215] introduce LLM to improve the performance of relation extraction models. BERT-MTB [216] learns relation representations based on BERT by performing the matching-the-blanks task and incorporating designed objectives for relation extraction. Curriculum-RE [160] utilizes curriculum learning to improve relation extraction models by gradually increasing the difficulty of the data during training. RECENT [217] introduces SpanBERT and exploits entity type restriction to reduce the noisy candidate relation types. Jiewen [218] extends RECENT by combining both the entity information and the label information into sentence-level embeddings, which enables the embedding to be entity-label aware.
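The sentence-level systems above largely share one recipe: an LLM encoder plus a light classifier over entity-aware representations. A simplified sketch of that recipe follows; the entity-start pooling, model name, and classifier head are assumptions for illustration, not any single cited model:

import torch
import torch.nn as nn
from transformers import AutoModel

class SentenceREClassifier(nn.Module):
    """Generic encoder + classifier head for sentence-level relation extraction."""
    def __init__(self, num_relations: int, encoder_name: str = "bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.classifier = nn.Linear(2 * hidden, num_relations)

    def forward(self, input_ids, attention_mask, head_idx, tail_idx):
        states = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        batch = torch.arange(states.size(0), device=states.device)
        head = states[batch, head_idx]   # representation at the head-entity position
        tail = states[batch, tail_idx]   # representation at the tail-entity position
        return self.classifier(torch.cat([head, tail], dim=-1))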
|
2306.08302#87
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 87 |
[47] Joya Chen, Difei Gao, Kevin Qinghong Lin, and Mike Zheng Shou. Affordance grounding from demonstration video to target image. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6799–6808, 2023.
[48] Ronghang Hu, Amanpreet Singh, Trevor Darrell, and Marcus Rohrbach. Iterative answer prediction with pointer-augmented multimodal transformers for textvqa. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9992–10002, 2020.
[49] Zhengyuan Yang, Yijuan Lu, Jianfeng Wang, Xi Yin, Dinei Florencio, Lijuan Wang, Cha Zhang, Lei Zhang, and Jiebo Luo. Tap: Text-aware pre-training for text-vqa and text-caption. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8751–8761, 2021.
[50] Difei Gao, Ke Li, Ruiping Wang, Shiguang Shan, and Xilin Chen. Multi-modal graph neural network for joint reasoning on vision and scene text. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
|
2306.08640#87
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08651
| 87 |
interface RobotManipulationInterface {

    // Leaves the <object> alone
    func leave_alone(object: str) -> None;

    // Sets the "designated receptacle" for
    // following actions --> *stateful*
    func set_designated(receptacle: str) -> None;

    // Relocates/gathers the <object> and moves it to the
    // designated receptacle
    func relocate(object: str) -> None;

    // Discards the <object> by placing it in the
    // designated receptacle
    func cleanup(object: str) -> None;

    // Signals end of execution
    func done() -> None;
}

// Create a `robot` (callable instance of interface)
robot = RobotManipulationInterface();
"""
)

API_DOCS = (
"""
You can invoke a given function on the robot by calling
`robot.<func>("object name")`. For example:
`robot.set_designated_area("recycling bin")`.

The API also enables multiple function invocations (separated by newlines).

Note that
|
2306.08651#87
|
Toward Grounded Social Reasoning
|
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not socially appropriate to
disassemble the sports car and put it away as part of the "tidying". How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable social reasoning, grounding this reasoning in the
real world has been challenging. To reason in the real world, robots must go
beyond passively querying LLMs and *actively gather information from the
environment* that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded social reasoning. To evaluate our framework at
scale, we release the MessySurfaces dataset which contains images of 70
real-world surfaces that need to be cleaned. We additionally illustrate our
approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the MessySurfaces benchmark and an average 15% improvement
on the robot experiments over baselines that do not use active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/groundedsocialreasoning.
|
http://arxiv.org/pdf/2306.08651
|
Minae Kwon, Hengyuan Hu, Vivek Myers, Siddharth Karamcheti, Anca Dragan, Dorsa Sadigh
|
cs.RO, cs.AI
| null | null |
cs.RO
|
20230614
|
20230614
|
[
{
"id": "1606.06565"
},
{
"id": "2008.02275"
},
{
"id": "2303.00001"
},
{
"id": "2305.06500"
},
{
"id": "2201.05320"
}
] |
2306.08302
| 88 |
Document-level RE (DocRE) aims to extract relations between entities across multiple sentences within a document. Hong et al. [219] propose a strong baseline for DocRE by replacing the BiLSTM backbone with LLMs. HIN [220] uses LLM to encode and aggregate entity representation at different levels, including entity, sentence, and document levels. GLRE [221] is a global-to-local network, which uses LLM to encode the document information in terms of entity global and local representations as well as context relation representations. SIRE [222] uses two LLM-based encoders to extract intra-sentence and inter-sentence relations. LSR [223] and GAIN [224] propose graph-based approaches which induce graph structures on top of LLM to better extract relations. DocuNet [225] formulates DocRE as a semantic segmentation task and introduces a U-Net [226] on the LLM encoder to capture local and global dependencies between entities. ATLOP [227] focuses on the multi-label problems in DocRE, which could be handled with two techniques, i.e., adaptive thresholding for the classifier and localized context pooling for the LLM. DREEAM
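To make the adaptive-thresholding idea in ATLOP concrete, the sketch below shows the inference rule only: a relation label is emitted for an entity pair when its logit exceeds the logit of a learned threshold (TH) class. This is a simplified illustration with an assumed class layout, not the official implementation:

import torch

def adaptive_threshold_predict(logits: torch.Tensor, th_index: int = 0) -> torch.Tensor:
    # logits: (num_pairs, num_classes), with the TH class at position th_index (assumed).
    th_logit = logits[:, th_index].unsqueeze(-1)
    preds = (logits > th_logit).float()
    preds[:, th_index] = 0.0  # the TH class itself is never emitted as a relation
    return preds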
|
2306.08302#88
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 88 |
[51] Stan Weixian Lei, Difei Gao, Jay Zhangjie Wu, Yuxuan Wang, Wei Liu, Mengmi Zhang, and Mike Zheng Shou. Symbolic replay: Scene graph as prompt for continual learning on vqa task. arXiv preprint arXiv:2208.12037, 2022.
[52] OpenAI. Gpt-4 technical report, 2023.
[53] Danny Driess, Fei Xia, Mehdi SM Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, et al. Palm-e: An embodied multimodal language model. arXiv preprint arXiv:2303.03378, 2023.
[54] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597, 2023.
[55] Deyao Zhu, Jun Chen, Xiaoqian Shen, xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing vision-language understanding with advanced large language models, 2023.
|
2306.08640#88
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |
2306.08302
| 89 |
in DocRE, which could be handled with two techniques, i.e., adaptive thresholding for the classifier and localized context pooling for the LLM. DREEAM [161] further extends and improves ATLOP by incorporating evidence information. End-to-End KG Construction. Currently, researchers are exploring the use of LLMs for end-to-end KG construction. Kumar et al. [95] propose a unified approach to build KGs from raw text, which contains two LLM-powered components. They first fine-tune an LLM on named entity recognition tasks to make it capable of recognizing entities in raw text. Then, they propose another "2-model BERT" for solving the relation extraction task, which contains two BERT-based classifiers. The first classifier learns the relation class whereas the second binary classifier learns the direction of the relations between the two entities. The predicted triples and relations are then used to construct the KG. Guo et al. [162] propose an end-to-end knowledge extraction model based on BERT, which can be applied to construct KGs from Classical Chinese text. Grapher [41] presents a novel end-to-end multi-stage
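The pipeline shape shared by these end-to-end construction systems can be sketched as follows; ner_model, relation_clf, and direction_clf stand in for fine-tuned components (e.g., BERT-based classifiers) and are assumptions for illustration, not the cited systems' code:

def build_kg(text: str, ner_model, relation_clf, direction_clf):
    # ner_model(text) -> [(entity_span, entity_type), ...]
    # relation_clf(text, head, tail) -> relation label or None
    # direction_clf(text, head, tail) -> "forward" or "backward"
    triples = []
    entities = ner_model(text)
    for i, head in enumerate(entities):
        for tail in entities[i + 1:]:
            relation = relation_clf(text, head, tail)
            if relation is None:
                continue
            if direction_clf(text, head, tail) == "forward":
                triples.append((head, relation, tail))
            else:
                triples.append((tail, relation, head))
    return triples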
|
2306.08302#89
|
Unifying Large Language Models and Knowledge Graphs: A Roadmap
|
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves
in the field of natural language processing and artificial intelligence, due to
their emergent ability and generalizability. However, LLMs are black-box
models, which often fall short of capturing and accessing factual knowledge. In
contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are
structured knowledge models that explicitly store rich factual knowledge. KGs
can enhance LLMs by providing external knowledge for inference and
interpretability. Meanwhile, KGs are difficult to construct and evolving by
nature, which challenges the existing methods in KGs to generate new facts and
represent unseen knowledge. Therefore, it is complementary to unify LLMs and
KGs together and simultaneously leverage their advantages. In this article, we
present a forward-looking roadmap for the unification of LLMs and KGs. Our
roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs,
which incorporate KGs during the pre-training and inference phases of LLMs, or
for the purpose of enhancing understanding of the knowledge learned by LLMs; 2)
LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding,
completion, construction, graph-to-text generation, and question answering; and
3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a
mutually beneficial way to enhance both LLMs and KGs for bidirectional
reasoning driven by both data and knowledge. We review and summarize existing
efforts within these three frameworks in our roadmap and pinpoint their future
research directions.
|
http://arxiv.org/pdf/2306.08302
|
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu
|
cs.CL, cs.AI
|
A short version of this paper was accepted by IEEE Transactions on
Knowledge and Data Engineering (TKDE)
|
IEEE Transactions on Knowledge and Data Engineering (TKDE) 2024
|
cs.CL
|
20230614
|
20240125
|
[
{
"id": "2309.01538"
},
{
"id": "2302.13971"
},
{
"id": "2110.08173"
},
{
"id": "2203.16747"
},
{
"id": "2201.05337"
},
{
"id": "2302.12095"
},
{
"id": "1810.04805"
},
{
"id": "2305.13168"
},
{
"id": "2305.12392"
},
{
"id": "2206.14268"
},
{
"id": "2111.08546"
},
{
"id": "2212.10511"
},
{
"id": "2107.02137"
},
{
"id": "2105.10311"
},
{
"id": "2308.09729"
},
{
"id": "2310.02129"
},
{
"id": "1910.12840"
},
{
"id": "2206.13163"
},
{
"id": "2303.11146"
},
{
"id": "2009.02835"
},
{
"id": "2205.07424"
},
{
"id": "1909.03193"
},
{
"id": "2010.15980"
},
{
"id": "2007.00655"
},
{
"id": "2203.11171"
},
{
"id": "2306.06427"
},
{
"id": "2305.08281"
},
{
"id": "2104.08696"
},
{
"id": "2110.08455"
},
{
"id": "2305.09645"
},
{
"id": "2310.01061"
},
{
"id": "2308.03688"
},
{
"id": "2305.01157"
},
{
"id": "2310.08975"
},
{
"id": "2301.08913"
},
{
"id": "2305.13172"
},
{
"id": "2212.13428"
},
{
"id": "2303.10368"
},
{
"id": "2307.07697"
},
{
"id": "2308.11730"
},
{
"id": "2108.01928"
},
{
"id": "2010.00711"
},
{
"id": "2304.10592"
},
{
"id": "2303.18223"
},
{
"id": "2304.10149"
},
{
"id": "2307.12976"
},
{
"id": "2309.03118"
},
{
"id": "2304.13712"
},
{
"id": "2212.01588"
},
{
"id": "2309.01219"
},
{
"id": "2302.04023"
},
{
"id": "2202.08772"
},
{
"id": "2208.02743"
},
{
"id": "2201.08239"
},
{
"id": "2201.08531"
},
{
"id": "2302.05019"
},
{
"id": "2003.10555"
},
{
"id": "1907.11692"
},
{
"id": "2201.04843"
},
{
"id": "2206.12617"
},
{
"id": "2201.05575"
},
{
"id": "2310.07984"
}
] |
2306.08640
| 89 |
[56] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
[57] Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Neural module networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 39–48, 2016.
[58] Ronghang Hu, Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Kate Saenko. Learning to reason: End-to-end module networks for visual question answering. In Proceedings of the IEEE international conference on computer vision, pages 804–813, 2017.
[59] Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Judy Hoffman, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. Inferring and executing programs for visual reasoning. In Proceedings of the IEEE international conference on computer vision, pages 2989–2998, 2017.
|
2306.08640#89
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms could be flexible for in-the-wild cases, and involves not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Besides, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner is capable of using natural
language to plan which tool in Executor should do next based on the current
reasoning progress. Inspector is an efficient memory manager to assist the
Planner to feed proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
http://arxiv.org/pdf/2306.08640
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, Mike Zheng Shou
|
cs.CV
|
Project page: https://showlab.github.io/assistgpt/
| null |
cs.CV
|
20230614
|
20230628
|
[
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1810.04805"
},
{
"id": "2305.06355"
},
{
"id": "2302.04761"
},
{
"id": "2112.09332"
},
{
"id": "2201.11903"
},
{
"id": "2204.00598"
},
{
"id": "2212.00280"
},
{
"id": "2112.08614"
},
{
"id": "2304.04370"
},
{
"id": "2107.03374"
},
{
"id": "2303.16434"
},
{
"id": "2303.08128"
},
{
"id": "2303.03378"
},
{
"id": "2205.10747"
},
{
"id": "2303.05499"
},
{
"id": "2211.09699"
},
{
"id": "2205.14100"
},
{
"id": "2212.09522"
},
{
"id": "2303.11381"
},
{
"id": "2301.12597"
},
{
"id": "2210.03629"
},
{
"id": "2303.04671"
},
{
"id": "2212.04356"
},
{
"id": "2303.17580"
},
{
"id": "1908.07490"
},
{
"id": "2304.09842"
},
{
"id": "2208.12037"
},
{
"id": "2209.10918"
},
{
"id": "2305.06500"
},
{
"id": "2211.11559"
}
] |