doi (stringlengths 10–10) | chunk-id (int64 0–936) | chunk (stringlengths 401–2.02k) | id (stringlengths 12–14) | title (stringlengths 8–162) | summary (stringlengths 228–1.92k) | source (stringlengths 31–31) | authors (stringlengths 7–6.97k) | categories (stringlengths 5–107) | comment (stringlengths 4–398, ⌀) | journal_ref (stringlengths 8–194, ⌀) | primary_category (stringlengths 5–17) | published (stringlengths 8–8) | updated (stringlengths 8–8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2306.06924
| 54 |
(moving the game sprite), and A3 is Medi's action space (displaying lists of message options for Alice to choose from). The team's score in the game, R, is defined by Bob's score in the single-player video game minus the attentional cost of the messages Alice wrote. So, Medi does a good job if she conveys useful information from Alice to Bob, at low attentional cost to Alice.
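As a minimal sketch of this scoring rule, under the illustrative assumption (not from the paper) that attentional cost is proportional to message length:

```python
# Sketch of the team reward R for the Alice/Medi/Bob mediation game.
# The per-character cost model is an assumption for illustration only.

def team_reward(bob_game_score: float,
                messages: list[str],
                cost_per_char: float = 0.01) -> float:
    """R = Bob's single-player score minus Alice's attentional cost.

    The attentional cost is modeled here, purely for illustration,
    as proportional to the total length of the messages Alice wrote.
    """
    attentional_cost = cost_per_char * sum(len(m) for m in messages)
    return bob_game_score - attentional_cost

# Example: Bob scores 120 points; Alice wrote two short messages.
print(team_reward(120.0, ["jump now", "collect the key first"]))
```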
If we can develop good solutions to this sort of problem, numerous possibilities open up, including potentially saving Alice a lot of time on writing emails. But to push the science specifically toward better mediation tools, a natural next step would be to try experiments with a symmetrized version of the game, where both Alice and Bob have goals and can take actions that affect both of their goals, and are assisted by an AI mediator Medi who can write suggested messages for both of them. Medi could sometimes send a message to Alice and Bob simultaneously, to create a "contract" between them if they both agree to it.
Negotiable controls for powerful systems. In order to reduce the risk of conflict over the control of powerful AI systems or other systems, it would be prudent to develop formal, AI-compatible principles for sharing control of powerful processes.
|
2306.06924#54
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.07174
| 54 |
[ZSZ+20] Jeffrey O Zhang, Alexander Sax, Amir Zamir, Leonidas Guibas, and Jitendra Malik. Side-tuning: a baseline for network adaptation via additive side networks. In Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part III 16, pages 698–714. Springer, 2020.
# A Inference Efficiency and GPU-Memory Efficiency
When the model is required to comprehend long sequences, the proposed method LONGMEM can load the out-of-boundary inputs into the cached memory as previous context. Thus, memory usage and inference speed are significantly improved compared with vanilla self-attention-based models. The detailed efficiency statistics are presented in Table 6.
| Model | In-Context Len. | In-Memory Len. | Inference Speed (tokens/s)↑ | GPU-Memory Usage (MBs)↓ |
|---|---|---|---|---|
| GPT-2* | 4k | N/A | 14666 | 20671 |
| LONGMEM | 1k | 3k | 22638 | 13335 |
| GPT-2* | 8k | N/A | 8417 | 54195 |
| LONGMEM | 1k | 7k | 21343 | 13437 |
Table 6: The superiority of our method over fully dense self-attention (GPT-2*) in terms of inference speed and GPU-memory utilization.
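To make the caching idea concrete, here is a minimal sketch under assumptions of mine rather than LONGMEM's actual architecture: out-of-boundary context is appended to a bounded key-value memory bank, and a query attends only to its top-k retrieved entries instead of the full sequence.

```python
# Illustrative sketch (not the LongMem implementation): a bounded
# key-value memory bank with dot-product top-k retrieval.
import numpy as np

class MemoryBank:
    def __init__(self, capacity: int = 65536, dim: int = 1024):
        self.capacity, self.dim = capacity, dim
        self.keys = np.empty((0, dim), dtype=np.float32)
        self.values = np.empty((0, dim), dtype=np.float32)

    def cache(self, keys: np.ndarray, values: np.ndarray) -> None:
        """Append out-of-boundary context; evict oldest entries past capacity."""
        self.keys = np.concatenate([self.keys, keys])[-self.capacity:]
        self.values = np.concatenate([self.values, values])[-self.capacity:]

    def retrieve(self, query: np.ndarray, k: int = 64) -> np.ndarray:
        """Return the values of the k keys most similar to the query."""
        scores = self.keys @ query       # similarity of query to every cached key
        top = np.argsort(scores)[-k:]    # indices of the k highest scores
        return self.values[top]

bank = MemoryBank(capacity=1024, dim=64)
bank.cache(np.random.randn(2048, 64).astype(np.float32),
           np.random.randn(2048, 64).astype(np.float32))  # only last 1024 kept
context = bank.retrieve(np.random.randn(64).astype(np.float32), k=8)
print(context.shape)  # (8, 64)
```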
# B Training Details
|
2306.07174#54
|
Augmenting Language Models with Long-Term Memory
|
Existing large language models (LLMs) can only afford fix-sized inputs due to
the input length limit, preventing them from utilizing rich long-context
information from past inputs. To address this, we propose a framework, Language
Models Augmented with Long-Term Memory (LongMem), which enables LLMs to
memorize long history. We design a novel decoupled network architecture with
the original backbone LLM frozen as a memory encoder and an adaptive residual
side-network as a memory retriever and reader. Such a decoupled memory design
can easily cache and update long-term past contexts for memory retrieval
without suffering from memory staleness. Enhanced with memory-augmented
adaptation training, LongMem can thus memorize long past context and use
long-term memory for language modeling. The proposed memory retrieval module
can handle unlimited-length context in its memory bank to benefit various
downstream tasks. Typically, LongMem can enlarge the long-form memory to 65k
tokens and thus cache many-shot extra demonstration examples as long-form
memory for in-context learning. Experiments show that our method outperforms
strong long-context models on ChapterBreak, a challenging long-context modeling
benchmark, and achieves remarkable improvements on memory-augmented in-context
learning over LLMs. The results demonstrate that the proposed method is
effective in helping language models to memorize and utilize long-form
contents. Our code is open-sourced at https://aka.ms/LongMem.
|
http://arxiv.org/pdf/2306.07174
|
Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, Furu Wei
|
cs.CL
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2301.12866"
},
{
"id": "1901.02860"
},
{
"id": "2101.00027"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2206.07682"
},
{
"id": "1606.05250"
},
{
"id": "1911.05507"
},
{
"id": "2108.12409"
},
{
"id": "2206.06522"
},
{
"id": "2204.10878"
},
{
"id": "2211.05100"
},
{
"id": "2201.11903"
},
{
"id": "2205.10178"
},
{
"id": "2205.01068"
}
] |
2306.07209
| 54 |
'function2': 'calculate_stock_index', 'output2': 'result4', 'description2': 'Time series data of Bank of Chengdu KDJ indicator'} step3=: {'arg1': ['result3', 'result4'], 'function1': 'merge_indicator_for_same_stock', 'output1': 'result5', 'description1': 'Bank of Chengdu Candlestick Indicator and KDJ Indicator Combined Time Series Data'} step4=: {'arg': ['input1', 'The K plot and KDJ index for chengduyinhang from 2022/06/06 to 2023/06/06'], 'function': 'plot_k_line', 'output': 'result1', 'description': 'Plotting candlestick and KDJ charts'}
[Workflow diagram residue: Step1 get_stock_prices_data (20220606–20230606) → Step2 calculate_stock_index → Step3 merge_indicator_for_same_stock → Step4 plot_k_line.]
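The step dictionaries above share a simple arg/function/output schema. A hedged sketch of a dispatcher for such workflows follows; the tool functions are stand-ins of mine, not Data-Copilot's real interfaces.

```python
# Illustrative dispatcher for step dicts of the form shown above.
# The tool functions are placeholders, not Data-Copilot's actual interfaces.

def get_stock_prices_data(name, start, end):
    return f"prices({name},{start},{end})"

def calculate_stock_index(prices, index):
    return f"{index}({prices})"

TOOLS = {
    "get_stock_prices_data": get_stock_prices_data,
    "calculate_stock_index": calculate_stock_index,
}

def run_workflow(steps):
    results = {}  # named intermediate outputs, e.g. 'result1'
    for step in steps:
        # Resolve arguments: prior outputs by name, literals otherwise.
        args = [results.get(a, a) for a in step["arg"]]
        results[step["output"]] = TOOLS[step["function"]](*args)
    return results

steps = [
    {"arg": ["chengduyinhang", "20220606", "20230606"],
     "function": "get_stock_prices_data", "output": "result1"},
    {"arg": ["result1", "KDJ"],
     "function": "calculate_stock_index", "output": "result2"},
]
print(run_workflow(steps)["result2"])  # KDJ(prices(chengduyinhang,...))
```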
|
2306.07209#54
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.06924
| 55 |
There is an interesting tension in this area, between fairness and successful negotiation. Suppose Alice and Bob are negotiating a deal to control a powerful system, and a mediator Medi is assisting in the negotiation. Medi may be able to finalize the deal by proposing a plan that's great for Alice but potentially terrible for Bob, in a way that Bob is unable to recognize in advance. (Betting is a simple example of this: a bet looks good to both parties, but can only carry positive expected value for one of them in reality.) This seems somewhat unfair to Bob. On the other hand, if Medi doesn't propose plans that look appealing from Bob's subjective perspective, Bob might walk away from the bargaining table.
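A small worked example of the betting point, with numbers chosen purely for illustration: both parties compute a positive subjective expected value, yet the bet is zero-sum under the true probability, so only one side can actually profit.

```python
# Illustrative numbers, not from the paper: a bet that looks good to
# both parties under their own beliefs, though under the true
# probability only Alice's side carries positive expected value.
stake = 10.0    # Bob pays Alice 10 if the event occurs; Alice pays Bob 10 if not
p_alice = 0.7   # Alice's subjective probability of the event
p_bob = 0.4     # Bob's subjective probability of the event
p_true = 0.6    # the actual probability, unknown to both

ev_alice_subj = p_alice * stake - (1 - p_alice) * stake   # +4.0: looks good to Alice
ev_bob_subj = (1 - p_bob) * stake - p_bob * stake         # +2.0: looks good to Bob
ev_alice_true = p_true * stake - (1 - p_true) * stake     # +2.0 in reality
ev_bob_true = -ev_alice_true                              # -2.0: zero-sum in reality

print(ev_alice_subj, ev_bob_subj, ev_alice_true, ev_bob_true)
```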
|
2306.06924#55
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.07174
| 55 |
Table 6: The superiority of our method over fully dense self-attention (GPT-2*) in terms of inference speed and GPU-memory utilization.
# B Training Details
The pre-training of the reproduced GPT-2* iterates over 117B tokens in total, with a 512 batch size and a fixed 1024-token segment length. The Adam optimizer [KB15] is adopted in memory-augmented adaptation training. The pre-training and adaptation are trained on 16 32GB Tesla V100 GPUs. Other detailed training hyperparameters and settings are presented in Table 7.
# Backbone LLM Hyperparameters (Reproduced GPT-2*)
Parameters, Precision, Layers, Hidden dim., Attention heads, Head Dim, Vocab size, Sequence length, Position emb., Tied embedding

# SideNet Hyperparameters (LONGMEM)
| Hyperparameter | Value |
|---|---|
| Parameters | 151M |
| Precision | float16 |
| Layers | 12 |
| Hidden dim. | 1024 |
| Attention heads | 16 |
| Head Dim | 64 |
| Sequence length | 1024 |

# Memory-Augmented Adaptation Hyperparameters
| Hyperparameter | Value |
|---|---|
| Global Batch Size | 256 |
| Learning rate | 2.0e-4 |
| Total tokens | 26B |
| Warmup tokens | 0 |
| LR Decay style | polynomial |
| Adam (β1, β2) | (0.9, 0.98) |
| Adam eps | 1e-06 |
| Weight decay | 0.01 |
| Gradient clipping | 2.0 |
Table 7: Memory-Augmented Adaptation and Architectural Hyperparameters.
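As a sketch only, the adaptation optimizer settings from Table 7 could be wired up as follows in PyTorch; the model and step count are placeholders, and the polynomial decay is a hand-rolled linear schedule rather than code from the paper.

```python
# Sketch: Adam + polynomial LR decay + gradient clipping per Table 7.
# `model` and `total_steps` are placeholders for illustration.
import torch

model = torch.nn.Linear(1024, 1024)   # stand-in for the SideNet
total_steps = 10_000                  # placeholder; the paper adapts on 26B tokens

optimizer = torch.optim.Adam(
    model.parameters(),
    lr=2.0e-4, betas=(0.9, 0.98), eps=1e-6, weight_decay=0.01,
)
# Polynomial decay (power 1, i.e. linear) to zero over training, no warmup.
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lambda step: max(0.0, 1.0 - step / total_steps),
)

for step in range(3):                 # training loop skeleton
    loss = model(torch.randn(8, 1024)).pow(2).mean()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=2.0)
    optimizer.step()
    optimizer.zero_grad()
    scheduler.step()
```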
# C Prompting Templates
|
2306.07174#55
|
Augmenting Language Models with Long-Term Memory
|
Existing large language models (LLMs) can only afford fix-sized inputs due to
the input length limit, preventing them from utilizing rich long-context
information from past inputs. To address this, we propose a framework, Language
Models Augmented with Long-Term Memory (LongMem), which enables LLMs to
memorize long history. We design a novel decoupled network architecture with
the original backbone LLM frozen as a memory encoder and an adaptive residual
side-network as a memory retriever and reader. Such a decoupled memory design
can easily cache and update long-term past contexts for memory retrieval
without suffering from memory staleness. Enhanced with memory-augmented
adaptation training, LongMem can thus memorize long past context and use
long-term memory for language modeling. The proposed memory retrieval module
can handle unlimited-length context in its memory bank to benefit various
downstream tasks. Typically, LongMem can enlarge the long-form memory to 65k
tokens and thus cache many-shot extra demonstration examples as long-form
memory for in-context learning. Experiments show that our method outperforms
strong long-context models on ChapterBreak, a challenging long-context modeling
benchmark, and achieves remarkable improvements on memory-augmented in-context
learning over LLMs. The results demonstrate that the proposed method is
effective in helping language models to memorize and utilize long-form
contents. Our code is open-sourced at https://aka.ms/LongMem.
|
http://arxiv.org/pdf/2306.07174
|
Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, Furu Wei
|
cs.CL
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2301.12866"
},
{
"id": "1901.02860"
},
{
"id": "2101.00027"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2206.07682"
},
{
"id": "1606.05250"
},
{
"id": "1911.05507"
},
{
"id": "2108.12409"
},
{
"id": "2206.06522"
},
{
"id": "2204.10878"
},
{
"id": "2211.05100"
},
{
"id": "2201.11903"
},
{
"id": "2205.10178"
},
{
"id": "2205.01068"
}
] |
2306.06924
| 56 |
Hence, there is sometimes a fundamental trade-off between a deal looking good to both Alice and Bob, and the deal treating Alice and Bob equitably over time (Critch and Russell, 2017). This trade-off can be seen in the behavior of reinforcement learning systems that are Pareto optimal for principals with different beliefs (Critch, 2017; Desai et al., 2018). The only way to eliminate this trade-off is to eliminate the differences in beliefs between the principals. For that, perhaps progress in building mediation tools would be a useful start, or control techniques for powerful AI systems that can explicitly account for differences in beliefs among a committee of humans controlling a single system, such as in Dalrymple's "Open Agency Architecture" concept (Dalrymple, 2022).
# 3 Conclusion
|
2306.06924#56
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.07174
| 56 |
Table 7: Memory-Augmented Adaptation and Architectural Hyperparameters.
# C Prompting Templates
We present all hand-crafted in-context learning prompting templates and labels for the 5 NLU datasets and the SQuAD QA dataset in Table 8.
| Task | Prompt | Labels |
|---|---|---|
| SST-2 | Review: [Sentence] Sentiment: [Label] | {positive, negative} |
| MR | Review: [Sentence] Sentiment: [Label] | {positive, negative} |
| MPQA | Review: [Sentence] Sentiment: [Label] | {positive, negative} |
| SST-5 | input: [Sentence] type: [Label] | {terrible, bad, okay, good, great} |
| Subj | input: [Sentence] type: [Label] | {objective, subjective} |
| Squad | Passage: [Passage] Question: [Question] Answer: [Answer] | — |
Table 8: The hand-crafted prompts used to query the model predictions in the zero-shot evaluation of the 5 NLU datasets and the question-answering dataset SQuAD.
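A small sketch of how these templates might be instantiated for zero-shot queries; the template strings mirror Table 8, while the helper function and its name are illustrative assumptions.

```python
# Sketch: instantiate the hand-crafted templates from Table 8.
TEMPLATES = {
    "SST-2": ("Review: {sentence} Sentiment:", ["positive", "negative"]),
    "SST-5": ("input: {sentence} type:",
              ["terrible", "bad", "okay", "good", "great"]),
    "Subj":  ("input: {sentence} type:", ["objective", "subjective"]),
}

def build_prompt(task: str, sentence: str) -> tuple[str, list[str]]:
    """Return the filled prompt and its candidate label verbalizers."""
    template, labels = TEMPLATES[task]
    return template.format(sentence=sentence), labels

prompt, labels = build_prompt("SST-2", "A deeply moving film.")
print(prompt)   # Review: A deeply moving film. Sentiment:
print(labels)   # ['positive', 'negative']
```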
|
2306.07174#56
|
Augmenting Language Models with Long-Term Memory
|
Existing large language models (LLMs) can only afford fix-sized inputs due to
the input length limit, preventing them from utilizing rich long-context
information from past inputs. To address this, we propose a framework, Language
Models Augmented with Long-Term Memory (LongMem), which enables LLMs to
memorize long history. We design a novel decoupled network architecture with
the original backbone LLM frozen as a memory encoder and an adaptive residual
side-network as a memory retriever and reader. Such a decoupled memory design
can easily cache and update long-term past contexts for memory retrieval
without suffering from memory staleness. Enhanced with memory-augmented
adaptation training, LongMem can thus memorize long past context and use
long-term memory for language modeling. The proposed memory retrieval module
can handle unlimited-length context in its memory bank to benefit various
downstream tasks. Typically, LongMem can enlarge the long-form memory to 65k
tokens and thus cache many-shot extra demonstration examples as long-form
memory for in-context learning. Experiments show that our method outperforms
strong long-context models on ChapterBreak, a challenging long-context modeling
benchmark, and achieves remarkable improvements on memory-augmented in-context
learning over LLMs. The results demonstrate that the proposed method is
effective in helping language models to memorize and utilize long-form
contents. Our code is open-sourced at https://aka.ms/LongMem.
|
http://arxiv.org/pdf/2306.07174
|
Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, Furu Wei
|
cs.CL
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2301.12866"
},
{
"id": "1901.02860"
},
{
"id": "2101.00027"
},
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2206.07682"
},
{
"id": "1606.05250"
},
{
"id": "1911.05507"
},
{
"id": "2108.12409"
},
{
"id": "2206.06522"
},
{
"id": "2204.10878"
},
{
"id": "2211.05100"
},
{
"id": "2201.11903"
},
{
"id": "2205.10178"
},
{
"id": "2205.01068"
}
] |
2306.07209
| 56 |
Request: I want to see the current news or the latest news
Intent Detecting Stage: Display the latest financial news and market developments on financial websites
WorkFlow Planning: step1=: {'arg1': ['sina'], 'function1': 'get_latest_new_from_web', 'output1': 'result1', 'description1': 'The latest financial news data'} step2=: {'arg': ['input1', 'The latest financial news and market developments', False], 'function': 'print_save_table', 'output': 'result1', 'description': 'Scroll to show news'}
[Workflow diagram and screenshot residue: Step1 get_latest_new_from_web(sina) → Step2 print_save_table, followed by a scrolling table of timestamped financial news items from 2023-06-07.]
|
2306.07209#56
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.06924
| 57 |
# 3 Conclusion
At this point, it is clear that AI technology can pose large-scale risks to humanity, including acute harms to individuals, large-scale harms to society, and even human extinction. Problematically, there may be no single accountable party or institution that primarily qualifies as blameworthy for such harms (Type 1). Even when there is a single accountable institution, there are several types of misunderstandings and intentions that could lead it to harmful outcomes (Types 2-6). These risk types include AI impacts that are bigger than expected, worse than expected, willfully accepted side effects of other goals, or intentional weaponization by criminals or states. For all of these risks, a combination of technical, social, and legal solutions is needed to achieve public safety.
# References
Amodei, D., C. Olah, J. Steinhardt, P. Christiano, J. Schulman, and D. Mané (2016). Concrete problems in AI safety. arXiv preprint arXiv:1606.06565.
Armstrong, S. and B. Levinstein (2017). Low impact artificial intelligences. arXiv preprint arXiv:1705.10720. (link).
Barak, B. (2002). Can we obfuscate programs.
2306.06924#57
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.07209
| 57 |
[Screenshot residue: continued scrolling table of timestamped financial news items (datetime/content pairs from 2023-06-07) produced by the news workflow.]
|
2306.07209#57
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.06924
| 58 |
(link).
Barak, B. (2002). Can we obfuscate programs. Retrieved from http://www.math.ias.edu/boaz/Papers/obf_informal.html. (link).
Barrat, J. (2013). Artificial intelligence and the end of the human era. New York: Thomas Dunne.
Bengio, Y. (2023). How rogue AIs may arise. (link).
Bengio, Y. and A. Ng (2023). Untitled video statement calling for articulation of concrete cases of harm and extinction. (link).
Bernstein, D. S., R. Givan, N. Immerman, and S. Zilberstein (2002). The complexity of decentralized control of Markov decision processes. Mathematics of Operations Research 27 (4), 819–840. INFORMS. (link).
Bitansky, N. and V. Vaikuntanathan (2018). Indistinguishability obfuscation from functional encryption. Journal of the ACM (JACM) 65 (6), 1–37. ACM New York, NY, USA. (link).
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford, UK: Oxford University Press.
|
2306.06924#58
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.07209
| 58 |
[Screenshot residue: further timestamped financial news items from 2023-06-07, including an Apollo Global Management (Marc Rowan) item.]
|
2306.07209#58
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.06924
| 59 |
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford, UK: Oxford University Press.
Butler, S. (1863). Darwin among the machines. The Press (Christchurch, New Zealand) June 13.
Capek, K. (1920). R.U.R. (Rossum's Universal Robots). Aventinum.
Carey, R. (2018). Incorrigibility in the CIRL framework. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pp. 30–35. (link).
Carpenter, D. and D. A. Moss (2013). Preventing regulatory capture: Special interest influence and how to limit it. Cambridge University Press.
Center for AI Safety (2023). Statement on AI risk. (link).
Critch, A. (2017). Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making. arXiv preprint arXiv:1701.01302. (link).
Critch, A. and S. Russell (2017). Servant of many masters: Shifting priorities in Pareto-optimal sequential decision-making. arXiv preprint arXiv:1711.00363. (link).
|
2306.06924#59
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.07209
| 59 |
[Screenshot residue: final timestamped financial news items from 2023-06-07, including market index and exchange-rate updates.]
|
2306.07209#59
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.06924
| 60 |
decision-making. arXiv preprint arXiv:1711.00363. (link).
Cyberspace Administration of China (2023). Measures for the management of generative artificial intelligence services (draft for comment). (link).
Dal Bó, E. (2006). Regulatory capture: A review. Oxford Review of Economic Policy 22 (2), 203–225. Oxford University Press.
Dalrymple, D. A. (2022). An open agency architecture for safe transformative AI. AI Alignment Forum. (link).
Desai, N., A. Critch, and S. J. Russell (2018). Negotiable reinforcement learning for Pareto optimal sequential decision-making. In Advances in Neural Information Processing Systems, pp. 4712–4720. (link).
Ed Yong (2013). Trees trap ants into sweet servitude. National Geographic. (link).
European Commission (2021). Regulation of the European Parliament and of the Council: Laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. (link).
|
2306.06924#60
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.07209
| 60 |
[Screenshot residue: tail of the timestamped news items from 2023-06-07.]
Request: Introduce the Bank of Chengdu: what is the company, and what is its main business?
Intent Detecting Stage: Briefly introduce the listed company Bank of Chengdu and its main business
WorkFlow Planning: step1=: {'arg1': ['Bank of Chengdu'], 'function1': 'get_company_info', 'output1': 'result1', 'description1': 'Information of Bank of Chengdu'} step2=: {'arg': ['input1', 'Bank of Chengdu company information', True], 'function': 'print_save_table', 'output': None, 'description': 'Information of Bank of Chengdu'}
[Workflow diagram and table residue: Step1 get_company_info → Step2 print_save_table, showing a company-profile table for Bank of Chengdu (ticker 601838.SH, SSE).]
|
2306.07209#60
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.06924
| 61 |
Garg, S., C. Gentry, S. Halevi, M. Raykova, A. Sahai, and B. Waters (2016a). Candidate indistinguishability obfuscation and functional encryption for all circuits. SIAM Journal on Computing 45 (3), 882–929. SIAM. (link).
Garg, S., C. Gentry, S. Halevi, M. Raykova, A. Sahai, and B. Waters (2016b). Hiding secrets in software: A cryptographic approach to program obfuscation. Communications of the ACM 59 (5), 113–120. ACM New York, NY, USA. (link).
Gilmore, A. B., A. W. Gallagher, and A. Rowell (2019). Tobacco industry's elaborate attempts to control a global track and trace system and fundamentally undermine the illicit trade protocol. Tobacco Control 28 (2), 127–140. BMJ Publishing Group Ltd. (link).
Good, I. J. (1966). Speculations concerning the first ultraintelligent machine. In Advances in Computers, Volume 6, pp. 31–88. Elsevier.
|
2306.06924#61
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.06924
| 62 |
Hadfield-Menell, D., S. J. Russell, P. Abbeel, and A. Dragan (2016). Cooperative inverse reinforcement learning. In Advances in Neural Information Processing Systems, pp. 3909–3917. (link).
Hibbard, B. (2001). Super-intelligent machines. ACM SIGGRAPH Computer Graphics 35 (1), 11–13. ACM New York, NY, USA.
Huang, S., H. Toner, Z. Haluza, and R. Creemers (2023). Translation: Measures for the management of generative artificial intelligence services (draft for comment). digichina.stanford.edu. (link).
Huang, S. H., M. Zambelli, J. Kay, M. F. Martins, Y. Tassa, P. M. Pilarski, and R. Hadsell (2019). Learning gentle object manipulation with curiosity-driven deep reinforcement learning. arXiv preprint arXiv:1903.08542. (link).
Krakovna, V., L. Orseau, M. Martic, and S. Legg (2018). Measuring and avoiding side effects using relative reachability. arXiv preprint arXiv:1806.01186. (link).
|
2306.06924#62
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.07209
| 62 |
[Workflow diagram and table residue: Step1 get_company_info → Step2 print_save_table, displaying a company-profile table for Bank of Chengdu (601838.SH, SSE).]
Figure 9: Data-Copilot can provide the latest financial news and company information by deploying the corresponding workflows.
# A Case for Interface design
|
2306.07209#62
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.06924
| 63 |
Krueger, D., T. Maharaj, S. Legg, and J. Leike (2019). Misleading meta-objectives and hidden incentives for distributional shift. (link).
Lee, W.-S., D. L. Grosh, F. A. Tillman, and C. H. Lie (1985). Fault tree analysis, methods, and applications: a review. IEEE Transactions on Reliability 34 (3), 194–203. IEEE.
Lin, H. and V. Vaikuntanathan (2016). Indistinguishability obfuscation from DDH-like assumptions on constant-degree graded encodings. In 2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS), pp. 11–20. IEEE. (link).
Matheny, J. (2023). Artificial intelligence challenges and opportunities for the Department of Defense. Testimony presented to the U.S. Senate Committee on Armed Services, Subcommittee on Cybersecurity, on April 19, 2023. (link).
Mearns, A. (1965). Fault tree analysis: the study of unlikely events in complex systems (fault tree analysis as tool to identify component failure as probable cause of undesired event in complex system). In System Safety Symposium, Seattle, Wash, pp. 1965.
|
2306.06924#63
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.07209
| 63 |
Figure 9: Data-Copilot can provide the latest financial news and company information by deploying the corresponding workflows.
# A Case for Interface design
Design prompt: You are an experienced program designer. I propose a request, and you can use the interfaces in the interface library to solve the request or design new interface functions (similar to pseudocode) to implement my request. Please do not write specific functions or any code, just an interface (pseudo-code functions). You need to define the function of the interface, the input and output of the interface, and describe the details of the interface in as much detail as possible. Finally, please tell how to implement my request in language and put each newly designed interface function in interface_library; don't miss any one. Please answer in the following format: Interface function Library: Interface1={Interface Name: %s, Function description: %s, Input and Output: %s}, Interface2={Interface Name: %s, Function description: %s, Input and Output: %s}, ...}. The solution using these interface functions: solution=[%s]
Request: Get the daily opening price data of Bank of Hangzhou from Jan 23, 2018 to Mar 13, 2019 and plot the trend. Please do not write the code, just design the pseudo-code functions in function_library
# Response:
|
2306.07209#63
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.06924
| 64 |
Milli, S. and A. D. Dragan (2019). Literal or pedagogic human? Analyzing human model misspecification in objective learning. arXiv preprint arXiv:1903.03877. (link).
National Institute of Standards and Technology (2022). AI risk management framework: Initial draft. (link).
Olah, C., A. Satyanarayan, I. Johnson, S. Carter, L. Schubert, K. Ye, and A. Mordvintsev (2018). The building blocks of interpretability. Distill 3 (3), e10. (link).
Raji, I. D., A. Smart, R. N. White, M. Mitchell, T. Gebru, B. Hutchinson, J. Smith-Loud, D. Theron, and P. Barnes (2020). Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 33–44. (link).
Raphael, S. and R. Winter-Ebmer (2001). Identifying the effect of unemployment on crime. The Journal of Law and Economics 44 (1), 259–283. The University of Chicago Press. (link).
|
2306.06924#64
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.07209
| 64 |
# Response:
First, we need to design a function to get the opening price data from the bank's database. Then, we need another function to process this data and prepare them for graphical representation. Finally, we need a function to plot the trend chart.
# Interface function Library
Interface 1={Interface Name: getBankData, ...} Interface 2={Interface Name: prepareDataForPlotting, ...} Interface 3={Interface Name: plotData, ...}
Solution=[These three interface functions will meet your needs, and they are all stored in the interface_library. In your program, you would first get the data using getBankData, then process the data using prepareDataForPlotting, and finally use plotData to plot the trend...]
Request: Get the monthly closing price data of Ningde Times from January 23, 2018 to March 13, 2019, and plot the trend
Response: To meet this demand, we can make a slight modification in the existing function_library... Interface 1={Interface Name: getBankData...} Interface 2= {Interface Name: prepareDataForPlotting...} Interface 3={Interface Name: plotData....} Interface 4={Interface Name: getStockData....}
Merge Solution=[ You need to call these functions in order to meet your needs. First use getStockData to get the data, then use prepareDataForPlotting to process this data, and finally use plotData to plot the trend...]
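To ground the dialogue above, here is a minimal Python sketch of what the four designed interfaces could look like. The function names come from the responses above; the pandas-based signatures, the bodies, and the deployment order in the trailing comment are illustrative assumptions, since Data-Copilot generates its interface implementations itself.

```python
# Hypothetical sketch of the interface library above; signatures are assumed.
import pandas as pd

def getBankData(bank_name: str, start: str, end: str) -> pd.DataFrame:
    """Fetch opening-price data for a bank between two dates (YYYYMMDD)."""
    raise NotImplementedError("backed by a real data source in the demo")

def getStockData(stock_name: str, start: str, end: str,
                 freq: str = "monthly") -> pd.DataFrame:
    """Fetch price data for an arbitrary stock, e.g. Ningde Times."""
    raise NotImplementedError("backed by a real data source in the demo")

def prepareDataForPlotting(df: pd.DataFrame,
                           column: str = "close") -> pd.DataFrame:
    """Select and clean the column that will be plotted."""
    return df[["date", column]].dropna()

def plotData(df: pd.DataFrame, title: str) -> None:
    """Render the prepared series as a trend (line) chart."""
    df.plot(x="date", y=df.columns[-1], kind="line", title=title)

# Deployment order for the second request above (illustrative only):
# raw = getStockData("Ningde Times", "20180123", "20190313", "monthly")
# plotData(prepareDataForPlotting(raw, "close"), "Ningde Times monthly close")
```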
|
2306.07209#64
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.06924
| 65 |
Law and Economics 44 (1), 259–283. The University of Chicago Press. (link).
Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence 1 (5), 206–215. Nature Publishing Group. (link).
Russell, S. (2014). White paper: Value alignment in autonomous systems.
Russell, S. (2019). Human compatible: Artificial intelligence and the problem of control. Penguin. (link).
Semenova, L. and C. Rudin (2019). A study in Rashomon curves and volumes: A new perspective on
generalization and model simplicity in machine learning. arXiv preprint arXiv:1908.01755. (link).
Shah, R., D. Krasheninnikov, J. Alexander, P. Abbeel, and A. Dragan (2019). Preferences implicit in the
state of the world. arXiv preprint arXiv:1902.04198. (link).
Shapiro, D. and R. Shachter. User-agent value alignment. (link).
Shapiro, D. G. (2011). The social agency problem. In 2011 AAAI Fall Symposium Series. (link).
|
2306.06924#65
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.07209
| 65 |
Merge_prompt: Please check if the functions in the interface library you designed can be merged. Especially for interfaces newly designed in the previous step, merge interfaces with similar functions and similar input and output formats into a new interface. You can use an argument to control different inputs and outputs. Please export the merged interface library as follows:
Interface function Library: Interface1={Interface Name: %s, Function description: %s, Input and Output: %s}, Interface2={Interface Name: %s, Function description: %s, Input and Output: %s}... The merge solution: Merge Solution=[%s]
# Iteration 2--Merge
Merge Solution=[ Yes, function_library can be merged and reformulated to eliminate duplicate functions. We can merge getBankData and getStockData into a more generic function getFinancialData, which can fetch any financial data, including bank and stock data. In addition, we can keep the prepareDataForPlotting and plotData functions, as they can also be applied to different types of financial data]
Interface 1={Interface Name: getFinancialData...} Interface 2={Interface Name: prepareDataForPlotting...} Interface 3={Interface Name: plotData....}
Figure 10: A simple example of iterative interface design, including interface definition and interface merging.
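As a hedged illustration of the merge described in Figure 10, the sketch below collapses getBankData and getStockData into one getFinancialData whose asset_type argument selects the data source; the signature and argument names are assumptions, not the interfaces the model actually generated.

```python
# Hypothetical merged interface; argument names are assumed.
import pandas as pd

def getFinancialData(name: str, start: str, end: str,
                     asset_type: str = "stock",
                     freq: str = "daily") -> pd.DataFrame:
    """Generic fetcher replacing both getBankData and getStockData."""
    if asset_type not in {"bank", "stock"}:
        raise ValueError(f"unknown asset_type: {asset_type}")
    # A real implementation would query the matching data source here; an
    # empty frame stands in for the fetched time series.
    return pd.DataFrame(columns=["date", "open", "close"])

# prepareDataForPlotting and plotData stay unchanged, since they already
# accept any financial time series.
```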
|
2306.07209#65
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.06924
| 66 |
Shapiro, D. G. (2011). The social agency problem. In 2011 AAAI Fall Symposium Series. (link).
Sheehan, M. (2021). China's new AI governance initiatives shouldn't be ignored. The Carnegie Endowment
for International Peace. (link).
Soares, N. and B. Fallenstein (2014). Aligning superintelligence with human interests: A technical research
agenda. Machine Intelligence Research Institute (MIRI) technical report 8. Citeseer. (link).
Sunak, R. and J. Biden (2023). President Biden and U.K. Prime Minister Rishi Sunak hold news conference
at White House | full video. CBS News. (link).
Taylor, J., E. Yudkowsky, P. LaVictoire, and A. Critch (2016). Alignment for advanced machine learning systems. Machine Intelligence Research Institute. (link).
Turing, A. (1951a, May). Can digital computers think? Third Programme. BBC.
Turing, A. (1951b). Intelligent Machinery, A Heretical Theory (c.1951). Reprinted in The Essential Turing, by B. Jack Copeland., 2004. Oxford University Press. (link).
|
2306.06924#66
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.07209
| 66 |
Prompt for Interface dispatch Analysis prompt: Please parse the input instruction for time, place, object and output format, and rewrite the instruction according to today's date; the rewritten new instruction must be semantically consistent and contain a specific time and specific indicators. Use the following format to output {Time:%s, Location:%s, Object:%s, Format:%s}. Demonstration 1: ### Instruction: Today's date is 2019-03-13, please help me plot the stock price trend of Guizhou Maotai from January 23, 2018 to today. ###New Instruction: Please help me plot the closing price of Guizhou Maotai from Jan 23, 2018 to Mar 13, 2019 (Time:20180123-20190313, Location: China, Object: Guizhou Maotai stock price, Format: Line) Select_prompt: Please select the most suitable task according to the given instruction and generate its task_instruction in the format of task={task_name: task_instruction}. There are four types of optional tasks. [fund_task]: used to extract and process tasks about all public funds. [stock_task]: for extracting and processing tasks about all stock prices, index information, company financials, etc., [economic_task]: for
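A small sketch of how the Analysis prompt's output could be post-processed downstream; the parse_intent helper and its regular expression are assumptions about plumbing, not a documented Data-Copilot component.

```python
import re

def parse_intent(llm_output: str) -> dict:
    """Extract Time/Location/Object/Format from the rewritten instruction."""
    match = re.search(
        r"[\{\(]Time:(.*?), Location ?:(.*?), Object ?:(.*?), Format ?:(.*?)[\}\)]",
        llm_output)
    if match is None:
        raise ValueError("LLM output did not follow the requested format")
    keys = ("time", "location", "object", "format")
    return {k: v.strip() for k, v in zip(keys, match.groups())}

# On the demonstration above:
demo = ("Please help me plot the closing price of Guizhou Maotai from "
        "Jan 23, 2018 to Mar 13, 2019 (Time:20180123-20190313, "
        "Location: China, Object: Guizhou Maotai stock price, Format: Line)")
print(parse_intent(demo))
# {'time': '20180123-20190313', 'location': 'China',
#  'object': 'Guizhou Maotai stock price', 'format': 'Line'}
```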
|
2306.07209#66
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.06924
| 67 |
Turner, A. M., D. Hadfield-Menell, and P. Tadepalli (2020). Conservative agency via attainable utility preservation. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pp. 385–391. (link).
US Senate Judiciary Committee (2023). US Senate Judiciary Committee Hearing on Oversight of A.I. (video footage). (link).
US Senate Subcommittee on Privacy, Technology, and the Law (2022). Compute accounting principles can
help reduce AI risks. (link).
US Senate Subcommittee on Privacy, Technology, and the Law (2023). Oversight of A.I.: Rules for Artificial
Intelligence. (link).
Watson, H. A. et al. (1961). Launch control safety study. Bell Labs.
White House (2022). Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American
People. (link).
Wiener, N. (1960). Some moral and technical consequences of automation. Science 131 (3410), 1355–1358. JSTOR. (link).
Yampolskiy, R. V. (2015). Taxonomy of pathways to dangerous AI. arXiv preprint arXiv:1511.03246.
|
2306.06924#67
|
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
|
While several recent works have identified societal-scale and
extinction-level risks to humanity arising from artificial intelligence, few
have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive
taxonomies are possible, and some are useful -- particularly if they reveal new
risks or practical approaches to safety. This paper explores a taxonomy based
on accountability: whose actions lead to the risk, are the actors unified, and
are they deliberate? We also provide stories to illustrate how the various risk
types could each play out, including risks arising from unanticipated
interactions of many AI systems, as well as risks from deliberate misuse, for
which combined technical and policy solutions are indicated.
|
http://arxiv.org/pdf/2306.06924
|
Andrew Critch, Stuart Russell
|
cs.AI, cs.CR, cs.CY, cs.LG, 68T01, I.2.0
| null | null |
cs.AI
|
20230612
|
20230614
|
[
{
"id": "1903.08542"
},
{
"id": "1606.06565"
},
{
"id": "1903.03877"
},
{
"id": "1908.01755"
},
{
"id": "1711.00363"
},
{
"id": "1701.01302"
},
{
"id": "1705.10720"
},
{
"id": "1511.03246"
},
{
"id": "1902.04198"
},
{
"id": "1806.01186"
}
] |
2306.07209
| 67 |
about all public funds. [stock_task]: for extracting and processing tasks about all stock prices, index information, company financials, etc., [economic_task]: for extracting and processing tasks about all Chinese macroeconomic and monetary policies, as well as querying companies and northbound funds, [visualization_task]: for drawing one or more K-line charts, trend charts, or outputting statistical results. Use the following format to output task1={%s: %s}, task2={%s: %s}. Demonstration 1: ###Instruction: Please help me plot the closing price of Guizhou Maotai from Jan 23, 2018 to Mar 13, 2019 ###Plan: task1={"stock_task": "Get the time series data of Guizhou Maotai's stock price from 20180123 to 20190313"}, task2={"visualization_task": "Plot the stock price of Guizhou Maotai from 20180123 to 20190313 on a line chart"} Planning prompt: Please use the given interface (function) to complete the Instruction step by step; at each step you can only choose one or more interfaces from the following interface library without dependencies, and generate the corresponding arguments for the interface; the
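A minimal sketch, under the assumption that the Select_prompt output is post-processed with a regular expression, of turning the task i={"task_name": "task_instruction"} lines into an ordered task list; the parse_tasks helper is hypothetical.

```python
import re

TASK_TYPES = {"fund_task", "stock_task", "economic_task", "visualization_task"}

def parse_tasks(llm_output: str) -> list:
    """Return (task_name, task_instruction) pairs in plan order."""
    pairs = re.findall(r'task ?\d+=\{"(\w+)" ?: ?"(.*?)"\}', llm_output)
    tasks = [(name, instr.strip()) for name, instr in pairs
             if name in TASK_TYPES]
    if not tasks:
        raise ValueError("no valid tasks found in LLM plan")
    return tasks

plan = ('task 1={"stock_task": "Get the time series data of Guizhou Maotai '
        'stock price from 20180123 to 20190313"}, '
        'task2={"visualization_task": "Plot the stock price of Guizhou Maotai '
        'from 20180123 to 20190313 on a line chart"}')
print(parse_tasks(plan))
# [('stock_task', '...'), ('visualization_task', '...')]
```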
|
2306.07209#67
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.07209
| 68 |
step by step, at each step you can only choose one or more interfaces from the following interface library without dependencies, and generate the corresponding arguments for the interface; the argument format should strictly follow the interface description. Interfaces in later steps can use results generated by previous interfaces. Please generate JSON for each step: step1={
"arg1": [arg1, arg2...],
"function1": "%s",
"output1": "%s",
"description1": "%s"
}, step2={
"arg1": [arg1, arg2...],
"function1": "%s",
"output1": "%s",
"description1": "%s"
}, ending with ####. Demonstration 1: ###Instruction: Get the time series data of Guizhou Maotai's daily closing price from Jan 23, 2018 to Mar 13, 2019 ####Function Call: step1={
"arg1": ["Guizhou
|
2306.07209#68
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.07209
| 69 |
daily closing price from Jan 23, 2018 to Mar 13, 2019 ####Function Call: step1={
"arg1": ["Guizhou Maotai", "20180123", "20190313", "daily"],
"function1": "get_stock_prices_data",
"output1": "result1",
"description1": "Guizhou Maotai daily stock price (open, close, high, low, etc.) time series data"
}, step2={
"arg1": ["result1", "close"],
"function1": "calculate_stock_index",
"output1": "result2",
"description1": "Guizhou Maotai's closing price time series data"
}, step3={
"arg": ["result2", null, "line", "Guizhou Maotai closing price chart from January 23, 2018 to March 13, 2019"],
"function": "plot_stock_data",
"output": "result1",
"description": "
|
2306.07209#69
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.07209
| 70 |
chart from January 23, 2018 to March 13, 2019"],
"function": "plot_stock_data",
"output": "result1",
"description": "Guizhou Maotai Stock Price Line Chart"
} ###
[OCR residue of a figure rendering the same three-step JSON function call: get_stock_prices_data, then calculate_stock_index, then plot_stock_data, producing the Guizhou Maotai closing-price line chart.]
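To show how a dispatched plan like the one above could actually execute, here is a minimal executor sketch: it substitutes "resultN" references with earlier outputs and calls each named interface in order. The registry lambdas are placeholder stand-ins for the real interface library, the step keys are normalized to arg/function/output for brevity, and the third step's output is renamed result3 to avoid the result1 name collision in the excerpt.

```python
def run_workflow(steps, registry):
    """Execute plan steps in order, threading outputs through 'resultN' names."""
    results = {}
    for step in steps:
        # Swap any "resultN" placeholder for the value computed earlier.
        args = [results.get(a, a) if isinstance(a, str) else a
                for a in step["arg"]]
        results[step["output"]] = registry[step["function"]](*args)
    return results

steps = [
    {"arg": ["Guizhou Maotai", "20180123", "20190313", "daily"],
     "function": "get_stock_prices_data", "output": "result1"},
    {"arg": ["result1", "close"],
     "function": "calculate_stock_index", "output": "result2"},
    {"arg": ["result2", None, "line", "Guizhou Maotai closing price chart"],
     "function": "plot_stock_data", "output": "result3"},
]
registry = {  # placeholder implementations, not the real interface library
    "get_stock_prices_data": lambda name, s, e, f: f"<{f} prices of {name}>",
    "calculate_stock_index": lambda df, col: f"<{col} series from {df}>",
    "plot_stock_data": lambda df, ref, kind, title: f"<{kind} chart: {title}>",
}
print(run_workflow(steps, registry))
```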
|
2306.07209#70
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.07209
| 71 |
[OCR residue of Figure 11's rendered JSON: the three-step function call (get_stock_prices_data, then calculate_stock_index, then plot_stock_data) producing the Guizhou Maotai closing-price line chart.]
Figure 11: Prompt and Demonstration design for interface dispatch stage.
|
2306.07209#71
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.07209
| 72 |
Hello Data-Copilot! A powerful AI system connects humans and data.
[Screenshot of the demo page. (A) User Input Panel: notes that the current version only supports Chinese financial data, recommends the paid OpenAI API or Azure-OpenAI service because the free OpenAI API is rate-limited (3 requests per minute), and offers example queries for stocks, the economy, companies, and funds. (B) Text Output Panel: shows the summary, result, and solving steps, e.g. a [stock_task] retrieving year-on-year net-profit growth (netprofit_yoy) of index constituent stocks as of 2023-03-31. (C) Graph and Table Panel: renders the resulting charts and tables.]
|
2306.07209#72
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.07209
| 73 |
[Continuation of the Graph and Table Panel screenshot: a bar chart plus Top-10 and Last-10 tables of stock names ranked by the queried indicator.]
|
2306.07209#73
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.07209
| 74 |
Figure 12: The user interface of our system. The green box (A) is the user input panel, and the purple (B) and red parts (C) are the results returned by the system.
# References
[1] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language Models are Few-Shot Learners. In NeurIPS, 2020.
[2] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, and others. Palm: Scaling language modeling with pathways. ArXiv, abs/2204.02311, 2022.
|
2306.07209#74
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.07209
| 75 |
[3] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. Opt: Open Pre-trained Transformer Language Models. ArXiv, abs/2205.01068, 2022.
[4] Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, Weng Lam Tam, Zixuan Ma, Yufei Xue, Jidong Zhai, Wenguang Chen, Zhiyuan Liu, Peng Zhang, Yuxiao Dong, and Jie Tang. Glm-130b: An Open Bilingual Pre-trained Model. ICLR 2023 poster, 2023.
|
2306.07209#75
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.07209
| 76 |
[5] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and Efficient Foundation Language Models. ArXiv, abs/2302.13971, 2023.
[6] OpenAI. Chatgpt. 2022.
[7] OpenAI. Gpt-4 technical report. 2023.
[8] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed Chi, Quoc V Le, and Denny Zhou. Chain of Thought Prompting Elicits Reasoning in Large Language Models. In Conference on Neural Information Processing Systems (NeurIPS), 2022.
[9] Takeshi Kojima, Shixiang (Shane) Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large Language Models are Zero-Shot Reasoners. In Conference on Neural Information Processing Systems (NeurIPS), 2022.
|
2306.07209#76
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.07209
| 77 |
[10] Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. Pal: Program-aided Language Models. ArXiv, abs/2211.10435, 2022.
[11] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-Consistency Improves Chain of Thought Reasoning in Language Models. ICLR 2023 poster, abs/2203.11171, 2023.
[12] Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback. CoRR, abs/2203.02155, 2022.
|
2306.07209#77
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.07209
| 78 |
[13] Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Atharva Naik, Arjun Ashok, Arut Selvan Dhanasekaran, Anjana Arunkumar, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Gary Lai, Ishan Virendrabhai Purohit, Ishani Mondal, Jacob William Anderson, Kirby C. Kuznia, Krima Doshi, Kuntal Kumar Pal, Maitreya Patel, Mehrad Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, rushang karia, Savan Doshi, Shailaja Keyur Sampat, Siddhartha Mishra, Sujan Reddy A, Sumanta Patro, Tanay Dixit, Xudong Shen, Chitta Baral, Yejin Choi, Noah A. Smith, Hannaneh Hajishirzi, and Daniel Khashabi. Super-NaturalInstructions:
|
2306.07209#78
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.07209
| 79 |
Generalization via Declarative Instructions on 1600+ NLP Tasks. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, 2022.
[14] Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V Le. Finetuned language models are zero-shot learners. In International Conference on Learning Representations, 2022.
[15] S. Iyer, Xiaojuan Lin, Ramakanth Pasunuru, Todor Mihaylov, Daniel Simig, Ping Yu, Kurt Shuster, Tianlu Wang, Qing Liu, Punit Singh Koura, Xian Li, Brian O'Horo, Gabriel Pereyra, Jeff Wang, Christopher Dewan, Asli Celikyilmaz, Luke Zettlemoyer, and Veselin Stoyanov. OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization. ArXiv, abs/2212.12017, 2022.
|
2306.07209#79
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.07209
| 80 |
[16] Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Y. Zhao, Yanping Huang, Andrew M. Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. Scaling instruction-finetuned language models. CoRR, abs/2210.11416, 2022.
[17] Victor Dibia. Lida: A tool for automatic generation of grammar-agnostic visualizations and infographics using large language models. arXiv preprint arXiv:2303.02927, 2023.
[18] Liying Cheng, Xingxuan Li, and Lidong Bing. Is gpt-4 a good data analyst? arXiv preprint arXiv:2305.15038, 2023.
|
2306.07209#80
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.07209
| 81 |
[19] Hongxin Li, Jingran Su, Yuntao Chen, Qing Li, and Zhaoxiang Zhang. Sheetcopilot: Bringing software productivity to the next level through large language models. arXiv preprint arXiv:2305.19308, 2023.
[20] Chenfei Wu, Sheng-Kai Yin, Weizhen Qi, Xiaodong Wang, Zecheng Tang, and Nan Duan. Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models. arXiv, 2023.
[21] Rongjie Huang, Mingze Li, Dongchao Yang, Jiatong Shi, Xuankai Chang, Zhenhui Ye, Yuning Wu, Zhiqing Hong, Jiawei Huang, Jinglin Liu, et al. Audiogpt: Understanding and generating speech, music, sound, and talking head. arXiv preprint arXiv:2304.12995, 2023.
[22] Peter Naur. Concise survey of computer methods. 1974.
[23] OpenAI. Gpt-4 technical report, 2023.
|
2306.07209#81
|
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow
|
Various industries such as finance, meteorology, and energy generate vast
amounts of heterogeneous data every day. There is a natural demand for humans
to manage, process, and display data efficiently. However, it necessitates
labor-intensive efforts and a high level of expertise for these data-related
tasks. Considering that large language models (LLMs) have showcased promising
capabilities in semantic understanding and reasoning, we advocate that the
deployment of LLMs could autonomously manage and process massive amounts of
data while displaying and interacting in a human-friendly manner. Based on this
belief, we propose Data-Copilot, an LLM-based system that connects numerous
data sources on one end and caters to diverse human demands on the other end.
Acting like an experienced expert, Data-Copilot autonomously transforms raw
data into visualization results that best match the user's intent.
Specifically, Data-Copilot autonomously designs versatile interfaces (tools)
for data management, processing, prediction, and visualization. In real-time
response, it automatically deploys a concise workflow by invoking corresponding
interfaces step by step for the user's request. The interface design and
deployment processes are fully controlled by Data-Copilot itself, without human
assistance. Besides, we create a Data-Copilot demo that links abundant data
from different domains (stock, fund, company, economics, and live news) and
accurately respond to diverse requests, serving as a reliable AI assistant.
|
http://arxiv.org/pdf/2306.07209
|
Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
|
cs.CL, cs.AI, cs.CE
| null | null |
cs.CL
|
20230612
|
20230612
|
[
{
"id": "2305.14318"
},
{
"id": "2303.17564"
},
{
"id": "2304.12995"
},
{
"id": "2305.19308"
},
{
"id": "2305.17126"
},
{
"id": "2305.15038"
},
{
"id": "2303.02927"
}
] |
2306.07209
| 82 |
[22] Peter Naur. Concise survey of computer methods. 1974.
[23] OpenAI. Gpt-4 technical report, 2023.
[24] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. Emergent abilities of large language models. CoRR, abs/2206.07682, 2022.
[25] Shijie Wu, Ozan Irsoy, Steven Lu, Vadim Dabravolski, Mark Dredze, Sebastian Gehrmann, Prabhanjan Kambadur, David Rosenberg, and Gideon Mann. Bloomberggpt: A large language model for finance. arXiv preprint arXiv:2303.17564, 2023.
[26] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language models with self-generated instructions, 2022.
|
2306.07209#82
|
2306.07209
| 83 |
[27] Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, M. Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language Models Can Teach Themselves to Use Tools. ArXiv, abs/2302.04761, 2023.
[28] Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. Pal: Program-aided language models. ArXiv, abs/2211.10435, 2022.
|
2306.07209#83
|
2306.07209
| 84 |
[29] Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, Yi Ren Fung, Yusheng Su, Huadong Wang, Cheng Qian, Runchu Tian, Kunlun Zhu, Shihao Liang, Xingyu Shen, Bokai Xu, Zhen Zhang, Yining Ye, Bowen Li, Ziwei Tang, Jing Yi, Yuzhang Zhu, Zhenning Dai, Lan Yan, Xin Cong, Yaxi Lu, Weilin Zhao, Yuxiang Huang, Junxi Yan, Xu Han, Xian Sun, Dahai Li, Jason Phang, Cheng Yang, Tongshuang Wu, Heng Ji, Zhiyuan Liu, and Maosong Sun. Tool learning with foundation models, 2023.
[30] Shibo Hao, Tianyang Liu, Zhen Wang, and Zhiting Hu. Toolkengpt: Augmenting frozen language models with massive tools via tool embeddings. ArXiv, abs/2305.11554, 2023.
[31] Dídac Surís, Sachit Menon, and Carl Vondrick. Vipergpt: Visual inference via python execution for reasoning, 2023.
|
2306.07209#84
|
2306.07209
| 85 |
[31] Dídac Surís, Sachit Menon, and Carl Vondrick. Vipergpt: Visual inference via python execution for reasoning, 2023.
[32] Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. ArXiv, abs/2303.17580, 2023.
[33] Yaobo Liang, Chenfei Wu, Ting Song, Wenshan Wu, Yan Xia, Yu Liu, Yang Ou, Shuai Lu, Lei Ji, Shaoguang Mao, Yun Wang, Linjun Shou, Ming Gong, and Nan Duan. Taskmatrix.ai: Completing tasks by connecting foundation models with millions of apis, 2023.
[34] Tianle Cai, Xuezhi Wang, Tengyu Ma, Xinyun Chen, and Denny Zhou. Large language models as tool makers. arXiv preprint arXiv:2305.17126, 2023.
|
2306.07209#85
|
2306.06624
| 0 |
arXiv:2306.06624v2 [cs.CL] 27 Aug 2023
# RestGPT: Connecting Large Language Models with Real-World RESTful APIs
Yifan Song1, Weimin Xiong1, Dawei Zhu1, Wenhao Wu1, Han Qian2, Mingbo Song2, Hailiang Huang2, Cheng Li3, Ke Wang3, Rong Yao3, Ye Tian3, Sujian Li1∗ 1School of Computer Science, Peking University 2School of Electronics Engineering and Computer Science, Peking University 3Huawei Technologies {yfsong, lisujian}@pku.edu.cn https://restgpt.github.io
# Abstract
|
2306.06624#0
|
RestGPT: Connecting Large Language Models with Real-World RESTful APIs
|
Tool-augmented large language models (LLMs) have achieved remarkable progress
in tackling a broad range of tasks. However, existing methods are mainly
restricted to specifically designed tools and fail to fulfill complex
instructions, having great limitations when confronted with real-world
scenarios. In this paper, we explore a more realistic scenario by connecting
LLMs with RESTful APIs, which adhere to the widely adopted REST software
architectural style for web service development. To address the practical
challenges of tackling complex instructions, we propose RestGPT, which exploits
the power of LLMs and conducts a coarse-to-fine online planning mechanism to
enhance the abilities of task decomposition and API selection. RestGPT also
contains an API executor tailored for calling RESTful APIs, which can
meticulously formulate parameters and parse API responses. To fully evaluate
the performance of RestGPT, we propose RestBench, a high-quality benchmark
which consists of two real-world scenarios and human-annotated instructions
with gold solution paths. Experiments show that RestGPT is able to achieve
impressive results in complex tasks and has strong robustness, which paves a
new way towards AGI. RestGPT and RestBench are publicly available at
https://restgpt.github.io/.
|
http://arxiv.org/pdf/2306.06624
|
Yifan Song, Weimin Xiong, Dawei Zhu, Wenhao Wu, Han Qian, Mingbo Song, Hailiang Huang, Cheng Li, Ke Wang, Rong Yao, Ye Tian, Sujian Li
|
cs.CL
|
Add RestBench to evaluate RestGPT
| null |
cs.CL
|
20230611
|
20230827
|
[
{
"id": "2302.04761"
},
{
"id": "2303.04671"
},
{
"id": "2205.12255"
},
{
"id": "2304.09842"
},
{
"id": "2305.17926"
},
{
"id": "2302.07842"
},
{
"id": "2210.03629"
},
{
"id": "2303.17580"
},
{
"id": "2305.18752"
},
{
"id": "2201.11903"
},
{
"id": "2303.09014"
},
{
"id": "2303.11366"
},
{
"id": "2305.15334"
},
{
"id": "2303.08128"
},
{
"id": "2304.08354"
},
{
"id": "2304.08244"
},
{
"id": "2301.00234"
},
{
"id": "2306.05301"
},
{
"id": "2302.01560"
},
{
"id": "2206.07682"
}
] |
2306.06624
| 1 |
# Abstract
Tool-augmented large language models (LLMs) have achieved remarkable progress in tackling a broad range of tasks. However, existing methods are mainly restricted to specifically designed tools and fail to fulfill complex instructions, having great limitations when confronted with real-world scenarios. In this paper, we explore a more realistic scenario by connecting LLMs with RESTful APIs, which adhere to the widely adopted REST software architectural style for web service development. To address the practical challenges of tackling complex instructions, we propose RestGPT, which exploits the power of LLMs and conducts a coarse-to-fine online planning mechanism to enhance the abilities of task decomposition and API selection. RestGPT also contains an API executor tailored for calling RESTful APIs, which can meticulously formulate parameters and parse API responses. To fully evaluate the performance of RestGPT, we propose RestBench, a high-quality benchmark which consists of two real-world scenarios and human-annotated instructions with gold solution paths. Experiments show that RestGPT is able to achieve impressive results in complex tasks and has strong robustness, which paves a new way towards AGI.
# Introduction
|
2306.06624#1
|
2306.06770
| 1 |
# Abstract
Large language models (LLMs) offer significant promise as a knowledge source for task learning. Prompt engineering has been shown to be effective for eliciting knowledge from an LLM, but alone it is insufficient for acquiring relevant, situationally grounded knowledge for an embodied agent learning novel tasks. We describe a cognitive-agent approach that extends and complements prompt engineering, mitigating its limitations and thus enabling an agent to acquire new task knowledge matched to its native language capabilities, embodiment, environment, and user preferences. The approach is to increase the response space of LLMs and deploy general strategies, embedded within the autonomous agent, to evaluate, repair, and select among candidate responses produced by the LLM. We describe the approach and experiments that show how an agent, by retrieving and evaluating a breadth of responses from the LLM, can achieve 77–94% task completion in one-shot learning without user oversight. The approach achieves 100% task completion when human oversight (such as an indication of preference) is provided. Further, the type of oversight largely shifts from explicit, natural language instruction to simple confirmation/disconfirmation of high-quality responses that have been vetted by the agent before presentation to a user.
# 1 Introduction
|
2306.06770#1
|
Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis
|
Large language models (LLMs) offer significant promise as a knowledge source
for task learning. Prompt engineering has been shown to be effective for
eliciting knowledge from an LLM, but alone it is insufficient for acquiring
relevant, situationally grounded knowledge for an embodied agent learning novel
tasks. We describe a cognitive-agent approach that extends and complements
prompt engineering, mitigating its limitations and thus enabling an agent to
acquire new task knowledge matched to its native language capabilities,
embodiment, environment, and user preferences. The approach is to increase the
response space of LLMs and deploy general strategies, embedded within the
autonomous agent, to evaluate, repair, and select among candidate responses
produced by the LLM. We describe the approach and experiments that show how an
agent, by retrieving and evaluating a breadth of responses from the LLM, can
achieve 77-94% task completion in one-shot learning without user oversight. The
approach achieves 100% task completion when human oversight (such as an
indication of preference) is provided. Further, the type of oversight largely
shifts from explicit, natural language instruction to simple
confirmation/disconfirmation of high-quality responses that have been vetted by
the agent before presentation to a user.
|
http://arxiv.org/pdf/2306.06770
|
James R. Kirk, Robert E. Wray, Peter Lindes
|
cs.AI, cs.HC, cs.RO, I.2.6; I.2.7
|
7 pages, 8 figures, 3 tables, bibliography, appendix (34 pages
total). Text revised and results extended with additional tasks
| null |
cs.AI
|
20230611
|
20230822
|
[
{
"id": "2302.06706"
},
{
"id": "2112.04359"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2302.12246"
},
{
"id": "2208.09554"
},
{
"id": "2305.05658"
},
{
"id": "2303.17491"
},
{
"id": "2303.08774"
}
] |
2306.06624
| 2 |
# Introduction
Large language models (LLMs), such as GPT-3 [1] and ChatGPT [2], have shown various emergent abilities, including in-context learning [1, 3], reasoning [4, 5], and step-by-step planning [6, 7]. In pursuit of advancing the capabilities of LLMs for practical applications, an ongoing research direction is investigating the incorporation of external tools/APIs to enhance the functionality of LLMs [8, 9, 10, 11]. This endeavor has yielded successful integration of diverse tools, including search engines and other foundational models, with LLMs [12, 13, 14].
|
2306.06624#2
|
2306.06770
| 2 |
# 1 Introduction
Prompt engineering (Reynolds and McDonell 2021), along with in-context learning (OpenAI 2023), has been shown to be an effective strategy for extracting knowledge from a large language model (LLM). However, embodied agents learning task knowledge (e.g., goals and actions) face far more stringent requirements. LLM responses must be:
1. Interpretable by the agent's parsing capabilities. LLM responses must be understandable by the agent, meaning grammar and terminology are presented in a form that the agent can actually process.
2. Situated to the agent's environment. Objects, features, and relations referenced in an LLM response must be perceivable and identifiable in the environment for the agent to ground the response successfully.
3. Matched to agent's embodiment and affordances. An LLM, trained on a large corpus describing human activities, will (generally) generate responses conforming with human embodiment and affordances. Responses that do not consider an agent's often non-human embodiment (e.g., a single-armed robot) will often be infeasible for that agent to execute.
|
2306.06770#2
|
2306.06624
| 3 |
Despite significant progress, we find that existing API-augmented LLMs are still in the experimental stage and have yet to fully meet the demands of real-world user instructions. As shown in Table 1, current methods are limited to connecting with a small number of specially designed tools/APIs [11, 12, 15]. For example, Chameleon [12] designs a set of 15 tools, such as a table verbalizer and an image captioner. Additionally, the absence of a standardized API design specification obstructs the scalability of previous endeavors. Thus, the potential for connecting LLMs with a diverse range of real-world APIs, like RESTful APIs, remains under-explored and challenging. Furthermore, when dealing with a complex instruction in a real scenario, it is necessary to decompose it into smaller sub-tasks and accomplish them by employing a mix of various APIs. As a result, it becomes essential for
*Corresponding author.
|
2306.06624#3
|
2306.06770
| 3 |
not consider an agent's often non-human embodiment (e.g., a single-armed robot) will often be infeasible for that agent to execute.
4. Aligned with individual human preferences and values. Users will have individual expectations about how tasks should be performed and what constitutes appropriate outcomes in the current situation. Task success requires identifying and conforming to these preferences. The first three requirements are necessary for an embodied agent to use an LLM response to act in its world. We define responses that meet these requirements as viable. The final requirement is necessary to achieve the task as a specific human user prefers. A response is situationally relevant if it is viable and matches the user's preferences.
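The first three requirements are mechanical enough for the agent to check before acting. Below is a minimal sketch of such a viability filter in Python; the `AgentContext` fields and the set-based checks are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class AgentContext:
    vocabulary: set   # words the agent's parser understands      (requirement 1)
    percepts: set     # objects/relations currently perceivable   (requirement 2)
    affordances: set  # actions this embodiment can perform       (requirement 3)

def is_viable(tokens, objects, actions, ctx):
    """Check requirements 1-3. Requirement 4 (user preferences) cannot be
    checked mechanically and still needs human confirmation."""
    interpretable = set(tokens) <= ctx.vocabulary
    situated = set(objects) <= ctx.percepts
    feasible = set(actions) <= ctx.affordances
    return interpretable and situated and feasible

# Example: a response referencing an unseen object is not viable.
ctx = AgentContext(vocabulary={"move", "the", "can", "to", "bin"},
                   percepts={"can", "bin"},
                   affordances={"move"})
print(is_viable(["move", "the", "can", "to", "bin"], ["can", "bin"], ["move"], ctx))    # True
print(is_viable(["move", "the", "can", "to", "bin"], ["can", "shelf"], ["move"], ctx))  # False
```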
|
2306.06770#3
|
2306.06624
| 4 |
*Corresponding author.
| Framework | Num. | Extensibility | Schema | Planning | Planning Form | Feedback | Plug-n-Play |
|---|---|---|---|---|---|---|---|
| ReAct | 3 | ✗ | Specialized | Online | Natural Lang. | ✓ | ✗ |
| Toolformer | 5 | ✗ | Specialized | ✗ | - | ✗ | ✗ |
| Visual ChatGPT | 22 | ✗ | Specialized | ✗ | - | Human | ✗ |
| ViperGPT | 11 | ✗ | Python func. | Offline | Program | ✗ | ✗ |
| HuggingGPT | 24¹ | + | HuggingFace | Offline | Natural Lang. | ✗ | ✗ |
| API-Bank | 53 | ✗ | Specialized | ✗ | - | Human | ✗ |
| Chameleon | 15 | ✗ | Specialized | Offline | Natural Lang. | ✗ | ✗ |
| Gorilla | 1645† | + | JSON | ✗ | - | ✗ | ✗ |
| GPT4Tools | 31 | ✗ | Specialized | ✗ | - | Human | ✗ |
| RestGPT (ours) | 100+ | ++ | RESTful | Online | Coarse-to-Fine | ✓ | ✓ |

Table 1: A comparison of work that augments LLMs with API/tool usage. † denotes API selection with retrieval.
API-augmented LLMs to have robust planning and decision-making capabilities to effectively tackle real-world tasks [9]. Nonetheless, existing techniques, whether offline introspective plan-then-execute methods [9, 13, 12] or the ReAct framework [16], encounter challenges in effectively adapting to API feedback and generating viable plans.
|
2306.06624#4
|
2306.06770
| 4 |
To attempt to elicit viable responses from the LLM, we previously (Kirk et al. 2023) employed a template-based prompting approach (TBP; Olmo, Sreedharan, and Kambhampati 2021; Kirk et al. 2022; Reynolds and McDonell 2021). We developed prompt templates that included examples of desired task knowledge, instantiated them with context from the current task, and retrieved multiple responses (varying the LLM temperature parameter to generate different responses). Unfortunately, this TBP strategy produced responses that often violated one or more of the first three requirements. Human feedback could be used to overcome these limitations, but required substantial input to correct responses (as well as to align them with agent needs and user preferences), making TBP impractical for an embodied agent.
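For concreteness, here is a minimal sketch of the TBP loop just described: instantiate a template with current-task context, then sample several completions at different temperatures. The template wording and the `llm` callable are hypothetical stand-ins, not the actual prompts from Kirk et al.

```python
TEMPLATE = (
    "Examples of goal descriptions:\n{examples}\n"
    "Task: {task}\n"
    "Objects present: {objects}\n"
    "Goal description:"
)

def template_based_prompting(llm, task, objects, examples,
                             temperatures=(0.0, 0.3, 0.6, 0.9)):
    """Instantiate the prompt template with current-task context and retrieve
    one response per temperature (higher temperatures vary the wording more)."""
    prompt = TEMPLATE.format(examples=examples, task=task, objects=objects)
    return [llm(prompt, temperature=t) for t in temperatures]

# Usage with any completion function of signature llm(prompt, temperature=...):
# candidates = template_based_prompting(
#     llm, task="tidy the kitchen", objects="can, counter, recycling bin",
#     examples="the goal is that the mug is in the sink")
```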
|
2306.06770#4
|
2306.06624
| 5 |
In this work, we delve into a more realistic scenario by connecting LLMs with real-world RESTful APIs, aiming to fulfill practical user instructions. RESTful is the de facto standard for web service development [17], which utilizes HTTP methods (e.g., GET, POST) and URIs to manipulate resources. RESTful API development typically adheres to the OpenAPI Specification (OAS) [18], which describes the operations, parameters, and response schemas of each API endpoint. Therefore, our resulting framework can connect with any RESTful application and offers standardized API development processes, thereby enabling enhanced extensibility compared to previous approaches. However, connecting LLMs with RESTful APIs also brings practical challenges. First, calling real-world APIs may give rise to a multitude of unforeseen situations, requiring the framework to exhibit strong robustness and conduct reasonable planning. Second, the parameters and responses of RESTful APIs often follow specific formats, making API invocation and response parsing difficult.
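To make the RESTful/OAS setting concrete, the sketch below shows an OAS-style fragment for a single endpoint and a generic caller built on the `requests` library. The endpoint path, parameter, and server URL are hypothetical illustrations, not APIs used in the paper.

```python
import requests

# An OAS-style fragment for one endpoint, written as a Python dict.
# The path, parameter, and schema below are hypothetical examples.
spec_fragment = {
    "/search/movie": {
        "get": {
            "description": "Search for movies by title keyword.",
            "parameters": [{"name": "query", "in": "query",
                            "required": True, "schema": {"type": "string"}}],
            "responses": {"200": {"content": {"application/json": {"schema": {
                "type": "object",
                "properties": {"results": {"type": "array"}}}}}}},
        }
    }
}

def call_get(base_url, path, params):
    """Invoke a RESTful GET endpoint and return the parsed JSON body.
    HTTP errors (one kind of 'unforeseen situation') surface as exceptions."""
    resp = requests.get(base_url + path, params=params, timeout=10)
    resp.raise_for_status()
    return resp.json()

# Hypothetical usage:
# body = call_get("https://api.example.com", "/search/movie", {"query": "Inception"})
# print(body["results"])
```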
|
2306.06624#5
|
2306.06770
| 5 |
Motivated by these inadequacies, we present a novel strategy: Search Tree, Analyze and Repair, and Selection (STARS). Similar to "agentic" uses of LLMs (Nakajima 2023; Significant Gravitas 2023), we employ the LLM as a component within a larger system. Like self-consistency (Wang et al. 2023), STARS generates a large space of responses from the LLM (multiple responses to a query or specific task). In contrast with the voting in self-consistency, the agent analyzes and evaluates each response for potential issues (mismatched embodiment, unknown words, ungrounded references) and attempts to repair problematic responses via targeted re-prompting of the LLM. To select among candidates, the agent queries the LLM for a "preferred" response. The agent still solicits human feedback, but with STARS, the primary role of oversight is to ensure that the agent's behavior (and learning) incorporates user preferences.
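Read operationally, the STARS pipeline can be sketched as follows. The `analyze` heuristic and all prompt wordings are assumed stand-ins for the agent's actual evaluation knowledge, so treat this as an outline under those assumptions rather than the authors' code.

```python
def analyze(response, known_words):
    """Return the first detected issue ('' if none). A fuller analysis would
    also flag ungrounded object references and embodiment mismatches."""
    unknown = [w for w in response.lower().split() if w not in known_words]
    return f"unknown words: {unknown}" if unknown else ""

def stars(task_prompt, llm, known_words, n_candidates=5):
    # Search Tree: sample a breadth of candidate responses from the LLM.
    candidates = [llm(task_prompt, temperature=0.2 * i) for i in range(n_candidates)]

    # Analyze & Repair: vet each candidate; re-prompt the LLM about failures.
    viable = []
    for cand in candidates:
        issue = analyze(cand, known_words)
        if issue:
            cand = llm(f"{task_prompt}\nPrevious answer: {cand}\n"
                       f"Problem: {issue}\nPlease revise.", temperature=0.0)
            issue = analyze(cand, known_words)
        if not issue:
            viable.append(cand)

    # Selection: ask the LLM which vetted candidate it prefers (the sketch
    # assumes the reply parses as an integer index).
    menu = "\n".join(f"{i}: {c}" for i, c in enumerate(viable))
    choice = llm(f"Which numbered response best fits the task?\n{menu}\n"
                 "Answer with the number only.", temperature=0.0)
    return viable[int(choice)] if viable else None
```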
|
2306.06770#5
|
2306.06624
| 6 |
To tackle the limitations of previous methods and the practical challenges associated with RESTful APIs, we propose RestGPT, an LLM-based framework connecting with RESTful APIs to handle complex instructions. RestGPT comprises three main modules: a Planner, an API Selector, and an Executor. The core of each module is prompting an LLM. Unlike prior work that uses static or ReAct-style planning, which lacks flexibility in realistic scenarios, RestGPT adopts an iterative coarse-to-fine online planning mechanism. Given a complicated instruction, the planner generates a sub-task for the current task in the format of natural language. Subsequently, the API selector maps the coarse high-level sub-task to a finer API calling plan, forming a coarse-to-fine task planning process. The executor, responsible for invoking RESTful APIs and obtaining execution results, is further divided into two sub-modules: a Caller and a response Parser. The caller organizes API call parameters based on the API plan and API documentation, while the parser utilizes the response schema defined in the OAS to generate Python code to parse responses. Once it receives the execution results of the API plan, the planner performs online planning for the subsequent sub-task in the next step. Through the integration of the three modules, RestGPT shows superior extensibility and flexibility in mastering RESTful APIs.
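A minimal sketch of how the three modules could compose, with an `llm(prompt)` callable standing in for each module's prompted LLM and `call_api` for the HTTP layer; the prompt texts are illustrative assumptions, and the paper's parser generates schema-guided Python code rather than the single extraction prompt used here.

```python
def restgpt(instruction, llm, call_api, max_steps=10):
    history = []  # one (sub_task, api_plan, parsed_result) triple per step
    for _ in range(max_steps):
        # Planner: coarse, natural-language sub-task for the current state.
        sub_task = llm(f"Instruction: {instruction}\nDone so far: {history}\n"
                       "Next sub-task (or 'FINISH' if the instruction is done):")
        if sub_task.strip() == "FINISH":
            break
        # API selector: refine the sub-task into a concrete API calling plan.
        api_plan = llm(f"Sub-task: {sub_task}\n"
                       "Pick an endpoint from the OAS and fill in its parameters:")
        # Executor = caller + parser: invoke the API, then extract what the
        # planner needs from the schema-described response.
        raw = call_api(api_plan)
        parsed = llm(f"API response: {raw}\nExtract the value needed for: {sub_task}")
        history.append((sub_task, api_plan, parsed))
    return llm(f"Instruction: {instruction}\nResults: {history}\nFinal answer:")
```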
|
2306.06624#6
|
2306.06770
| 6 |
that the agent's behavior (and learning) incorporates user preferences.
To evaluate STARS against TBP, we embed both methods within an existing embodied agent (Mohan and Laird 2014; Mininger 2021; Kirk and Laird 2016). This agent uses interactive task learning (ITL; Laird et al. 2017; Gluck and Laird 2019) to learn novel tasks via natural language instruction from a human user. Instead of querying a human for a goal description of the task (e.g., "the goal is that the can is in the recycling bin"), the new agents (using TBP or STARS) access the LLM (repeatedly) for that goal.
We compare STARS to TBP and also evaluate the individual components of STARS (i.e., Search Tree, Analysis & Repair, Selection) in a simulated robotic environment. We assess both task completion rate and the amount of oversight needed to achieve 100% task completion. We hypothesize STARS will eliminate the need to solicit human feedback for unviable responses, resulting in a much higher task completion rate (without oversight) and reducing how much oversight is required when human input is available.
|
2306.06770#6
|
2306.06624
| 7 |
To evaluate the performance of RestGPT in utilizing RESTful APIs, we introduce RestBench, a human-annotated benchmark consisting of two realistic scenarios, the TMDB movie database and the Spotify music player. For each scenario, we collect diverse real-world user instructions that require the utilization of multiple APIs to complete. Based on RestBench, we conduct comprehensive experiments to investigate the performance of RestGPT across different dimensions. The experimental results demonstrate that RestGPT exhibits robust capabilities in handling complex user instructions and has significant advantages in task planning, API understanding, and response parsing.
¹HuggingGPT [13] claims it has integrated hundreds of models on HuggingFace. However, all of the models only cover 24 tasks such as text classification, object detection, etc.
Our contributions can be summarized as follows:
1. For the first time, we attempt to connect large language models with RESTful APIs, enabling the resulting framework to be compatible with existing real-world applications while also providing powerful extensibility.
2. We propose RestGPT, a coarse-to-fine online planning framework that effectively handles the practical challenges associated with connecting LLMs with RESTful APIs, including API understanding, planning, and API response parsing.
|
2306.06624#7
|
2306.06770
| 7 |
As we show below, over three different tasks, STARS achieves 77-94% task completion without oversight (in comparison to 35-66% with TBP). With oversight, STARS reduces the number of words needed from the user by 52-68% (compared to TBP). Further, providing oversight is much simpler for the user. The user no longer needs to evaluate the viability of responses nor to provide (many) goal descriptions; now, the user simply confirms or disconfirms the LLM responses that the agent has determined to be viable. Finally, because the original ITL agent learns long-term task and subtask knowledge in one shot, this new agent also demonstrates one-shot performance: it achieves 100% task completion when prompted to perform the task in the future, without accessing the LLM or requiring further human input.
# 2 Related Work
Core features of our approach are 1) online task learning (no pre-training for domain or task), 2) the exploitation of multiple sources of knowledge, 3) proactive evaluation of LLM responses, and 4) one-shot task learning. We review related work in terms of these solution features.
|
2306.06770#7
|
2306.06624
| 8 |
3. To evaluate the performance of RestGPT, we build a human-annotated benchmark, RestBench, which comprises two practical scenarios. Experimental results show the capability of RestGPT to effectively utilize a number of RESTful APIs to accomplish complex instructions.
# 2 Background
# 2.1 Tool-Augmented Language Models
The emergence of recent powerful LLMs has enabled artificial intelligence systems to match human skills in utilizing tools [8, 9]. To enhance the performance of LLMs in accessing up-to-date information and carrying out precise mathematical reasoning, early work leverages simple tools like web search engines and calculators, such as ReAct [16], Toolformer [11], and ART [19]. Another line of research has focused on equipping LLMs to coordinate with external models for complex AI tasks, exemplified by HuggingGPT [13], ViperGPT [20], Visual ChatGPT [14], and Chameleon [12]. Recently, some works study how to enable open-sourced LLMs, such as LLaMA, to perform API usage [21, 15, 22]. Additionally, API-Bank [23] provides a systematic benchmark to showcase the efficacy of LLMs using tools to respond to human instructions.
|
2306.06624#8
|
RestGPT: Connecting Large Language Models with Real-World RESTful APIs
|
Tool-augmented large language models (LLMs) have achieved remarkable progress
in tackling a broad range of tasks. However, existing methods are mainly
restricted to specifically designed tools and fail to fulfill complex
instructions, having great limitations when confronted with real-world
scenarios. In this paper, we explore a more realistic scenario by connecting
LLMs with RESTful APIs, which adhere to the widely adopted REST software
architectural style for web service development. To address the practical
challenges of tackling complex instructions, we propose RestGPT, which exploits
the power of LLMs and conducts a coarse-to-fine online planning mechanism to
enhance the abilities of task decomposition and API selection. RestGPT also
contains an API executor tailored for calling RESTful APIs, which can
meticulously formulate parameters and parse API responses. To fully evaluate
the performance of RestGPT, we propose RestBench, a high-quality benchmark
which consists of two real-world scenarios and human-annotated instructions
with gold solution paths. Experiments show that RestGPT is able to achieve
impressive results in complex tasks and has strong robustness, which paves a
new way towards AGI. RestGPT and RestBench are publicly available at
https://restgpt.github.io/.
|
http://arxiv.org/pdf/2306.06624
|
Yifan Song, Weimin Xiong, Dawei Zhu, Wenhao Wu, Han Qian, Mingbo Song, Hailiang Huang, Cheng Li, Ke Wang, Rong Yao, Ye Tian, Sujian Li
|
cs.CL
|
Add RestBench to evaluate RestGPT
| null |
cs.CL
|
20230611
|
20230827
|
[
{
"id": "2302.04761"
},
{
"id": "2303.04671"
},
{
"id": "2205.12255"
},
{
"id": "2304.09842"
},
{
"id": "2305.17926"
},
{
"id": "2302.07842"
},
{
"id": "2210.03629"
},
{
"id": "2303.17580"
},
{
"id": "2305.18752"
},
{
"id": "2201.11903"
},
{
"id": "2303.09014"
},
{
"id": "2303.11366"
},
{
"id": "2305.15334"
},
{
"id": "2303.08128"
},
{
"id": "2304.08354"
},
{
"id": "2304.08244"
},
{
"id": "2301.00234"
},
{
"id": "2306.05301"
},
{
"id": "2302.01560"
},
{
"id": "2206.07682"
}
] |
2306.06770
| 8 |
Inner Monologue (Huang et al. 2022) modifies its prompts based on feedback from the environment, agent, and user to elicit new responses when an action fails. Repair focuses on a single response at a time; STARS analyzes a set of responses to determine the result of using them, making evaluations and repairs before any response is selected and used. Logeswaran et al. (2022) plan sequences of subgoals from multiple LLM responses obtained from beam search (as in STARS), with re-ranking based on feedback from the environment. SayCan (Ahn et al. 2022) uses an LLM and a trained set of low-level robot skills with short language descriptions for objects. The LLM is prompted multiple times for a high-level task to retrieve one low-level step at a time until a complete plan is found. To obtain knowledge of low-level tasks, SayCan is trained on over 68K teleoperated demonstrations and human-rated simulations. STARS encodes properties for object classes (e.g., whether an object
can be "grabbed" by the robot) but requires no pre-training or prior exposure to the domain.
|
2306.06770#8
|
Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis
|
Large language models (LLMs) offer significant promise as a knowledge source
for task learning. Prompt engineering has been shown to be effective for
eliciting knowledge from an LLM, but alone it is insufficient for acquiring
relevant, situationally grounded knowledge for an embodied agent learning novel
tasks. We describe a cognitive-agent approach that extends and complements
prompt engineering, mitigating its limitations and thus enabling an agent to
acquire new task knowledge matched to its native language capabilities,
embodiment, environment, and user preferences. The approach is to increase the
response space of LLMs and deploy general strategies, embedded within the
autonomous agent, to evaluate, repair, and select among candidate responses
produced by the LLM. We describe the approach and experiments that show how an
agent, by retrieving and evaluating a breadth of responses from the LLM, can
achieve 77-94% task completion in one-shot learning without user oversight. The
approach achieves 100% task completion when human oversight (such as an
indication of preference) is provided. Further, the type of oversight largely
shifts from explicit, natural language instruction to simple
confirmation/disconfirmation of high-quality responses that have been vetted by
the agent before presentation to a user.
|
http://arxiv.org/pdf/2306.06770
|
James R. Kirk, Robert E. Wray, Peter Lindes
|
cs.AI, cs.HC, cs.RO, I.2.6; I.2.7
|
7 pages, 8 figures, 3 tables, bibliography, appendix (34 pages
total). Text revised and results extended with additional tasks
| null |
cs.AI
|
20230611
|
20230822
|
[
{
"id": "2302.06706"
},
{
"id": "2112.04359"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2302.12246"
},
{
"id": "2208.09554"
},
{
"id": "2305.05658"
},
{
"id": "2303.17491"
},
{
"id": "2303.08774"
}
] |
2306.06624
| 9 |
Despite the notable advancements in incorporating tools for large language models, previous methods have exhibited certain limitations, most notably their restricted support for a limited number of specially designed APIs [12] and their inferior planning methods [9, 24, 12]. We compare RestGPT with other tool-augmented language models in Table 1. As shown, our work stands out by supporting over 100 RESTful APIs. Furthermore, whereas most previous approaches adopt static offline planning and cannot interact with APIs or utilize feedback to adjust the plan, we employ a coarse-to-fine online planning framework with feedback, facilitating more flexible planning for complex instructions. Our work shares a similar spirit with AutoGPT, an autonomous agent capable of accomplishing complex tasks with numerous tools. While AutoGPT relies on developers to ensure compatibility with various applications, RestGPT can be integrated with any RESTful API-based application in a plug-and-play fashion.
# 2.2 RESTful APIs
|
2306.06624#9
|
RestGPT: Connecting Large Language Models with Real-World RESTful APIs
|
Tool-augmented large language models (LLMs) have achieved remarkable progress
in tackling a broad range of tasks. However, existing methods are mainly
restricted to specifically designed tools and fail to fulfill complex
instructions, having great limitations when confronted with real-world
scenarios. In this paper, we explore a more realistic scenario by connecting
LLMs with RESTful APIs, which adhere to the widely adopted REST software
architectural style for web service development. To address the practical
challenges of tackling complex instructions, we propose RestGPT, which exploits
the power of LLMs and conducts a coarse-to-fine online planning mechanism to
enhance the abilities of task decomposition and API selection. RestGPT also
contains an API executor tailored for calling RESTful APIs, which can
meticulously formulate parameters and parse API responses. To fully evaluate
the performance of RestGPT, we propose RestBench, a high-quality benchmark
which consists of two real-world scenarios and human-annotated instructions
with gold solution paths. Experiments show that RestGPT is able to achieve
impressive results in complex tasks and has strong robustness, which paves a
new way towards AGI. RestGPT and RestBench are publicly available at
https://restgpt.github.io/.
|
http://arxiv.org/pdf/2306.06624
|
Yifan Song, Weimin Xiong, Dawei Zhu, Wenhao Wu, Han Qian, Mingbo Song, Hailiang Huang, Cheng Li, Ke Wang, Rong Yao, Ye Tian, Sujian Li
|
cs.CL
|
Add RestBench to evaluate RestGPT
| null |
cs.CL
|
20230611
|
20230827
|
[
{
"id": "2302.04761"
},
{
"id": "2303.04671"
},
{
"id": "2205.12255"
},
{
"id": "2304.09842"
},
{
"id": "2305.17926"
},
{
"id": "2302.07842"
},
{
"id": "2210.03629"
},
{
"id": "2303.17580"
},
{
"id": "2305.18752"
},
{
"id": "2201.11903"
},
{
"id": "2303.09014"
},
{
"id": "2303.11366"
},
{
"id": "2305.15334"
},
{
"id": "2303.08128"
},
{
"id": "2304.08354"
},
{
"id": "2304.08244"
},
{
"id": "2301.00234"
},
{
"id": "2306.05301"
},
{
"id": "2302.01560"
},
{
"id": "2206.07682"
}
] |
2306.06770
| 9 |
can be "grabbed" by the robot) but requires no pre-training or prior exposure to the domain.
TidyBot (Wu et al. 2023) and TIDEE (Sarch et al. 2022) address robotic problems similar to one of our experimental tasks (tidying a kitchen). They also account for human preferences. TidyBot tries to elicit human preferences by having the LLM summarize a few answers given by a human. TIDEE attempts to learn preferences by using "commonsense priors" learned previously by performing tasks in a "training house." STARS does not depend on pre-training, but does elicit human preferences via NL dialogues.
PROGPROMPT (Singh et al. 2022) produces task plans by prompting an LLM with Python code that specifies the action primitives, objects, example tasks, and task name. The LLM returns a task plan in Python which includes assertions about states of the environment that are checked during execution, and recovery steps if an assertion fails. STARS retrieves NL descriptions of goals, rather than plans, and evaluates goals before they are used.
|
2306.06770#9
|
Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis
|
Large language models (LLMs) offer significant promise as a knowledge source
for task learning. Prompt engineering has been shown to be effective for
eliciting knowledge from an LLM, but alone it is insufficient for acquiring
relevant, situationally grounded knowledge for an embodied agent learning novel
tasks. We describe a cognitive-agent approach that extends and complements
prompt engineering, mitigating its limitations and thus enabling an agent to
acquire new task knowledge matched to its native language capabilities,
embodiment, environment, and user preferences. The approach is to increase the
response space of LLMs and deploy general strategies, embedded within the
autonomous agent, to evaluate, repair, and select among candidate responses
produced by the LLM. We describe the approach and experiments that show how an
agent, by retrieving and evaluating a breadth of responses from the LLM, can
achieve 77-94% task completion in one-shot learning without user oversight. The
approach achieves 100% task completion when human oversight (such as an
indication of preference) is provided. Further, the type of oversight largely
shifts from explicit, natural language instruction to simple
confirmation/disconfirmation of high-quality responses that have been vetted by
the agent before presentation to a user.
|
http://arxiv.org/pdf/2306.06770
|
James R. Kirk, Robert E. Wray, Peter Lindes
|
cs.AI, cs.HC, cs.RO, I.2.6; I.2.7
|
7 pages, 8 figures, 3 tables, bibliography, appendix (34 pages
total). Text revised and results extended with additional tasks
| null |
cs.AI
|
20230611
|
20230822
|
[
{
"id": "2302.06706"
},
{
"id": "2112.04359"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2302.12246"
},
{
"id": "2208.09554"
},
{
"id": "2305.05658"
},
{
"id": "2303.17491"
},
{
"id": "2303.08774"
}
] |
2306.06624
| 10 |
# 2.2 RESTful APIs
RESTful APIs have become a popular way to expose functionalities and data of web services to client applications [25, 17]. RESTful APIs also provide a standard for integrating external systems together using a simple yet powerful interface. There are millions of RESTful APIs available on the Internet, such as Spotify, Twitter, Gmail, etc. RESTful APIs are based on the REST architectural style, which emphasizes client-server communication via stateless HTTP requests, including GET, POST, etc., where resources are identified by self-descriptive URIs [25]. The responses of RESTful APIs are always structured in JSON format and contain various information. Thus, LLMs connected with RESTful APIs must possess a strong ability to extract the required information from the response.
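For illustration, a minimal stateless RESTful request in Python; the endpoint, query parameters, and token below are hypothetical, not from a real service:

```python
import requests

# A resource identified by a self-descriptive URI; the HTTP verb (GET)
# expresses the operation. "api.example.com" and "q" are made up.
resp = requests.get(
    "https://api.example.com/v1/albums",
    params={"q": "jazz", "limit": 5},
    headers={"Authorization": "Bearer <token>"},
)
resp.raise_for_status()
data = resp.json()   # RESTful responses are typically structured JSON
```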
OpenAPI Specification (OAS, or Swagger) [18], has been widely adopted as a standard for defining RESTful APIs. OAS is a structured documentation file which describes the endpoints, operations, parameters, response schemas, and other details of an API endpoint, providing a clear interface for our method to use the APIs.
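For example, a heavily trimmed OAS fragment for a single hypothetical endpoint, written here as a Python dict following the OpenAPI 3.0 layout:

```python
oas_fragment = {
    "paths": {
        "/albums/{album_id}": {
            "get": {
                "description": "Get details of an album by its id.",
                "parameters": [
                    {"name": "album_id", "in": "path",
                     "required": True, "schema": {"type": "string"}},
                ],
                "responses": {
                    "200": {
                        "content": {
                            "application/json": {
                                # The response schema is later used to
                                # generate parsing code (Section 3.3).
                                "schema": {
                                    "type": "object",
                                    "properties": {
                                        "name": {"type": "string"},
                                        "release_date": {"type": "string"},
                                    },
                                },
                            },
                        },
                    },
                },
            },
        },
    },
}
```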
[Figure 1: architecture diagram. User instruction → Planner (NL plan: continue or end) → API Selector (API plan) → Executor (caller, program interpreter, response parser), with each module reading its part of the OpenAPI Specification (endpoint descriptions, API documents, response schema); execution results feed back to the Planner.]
|
2306.06624#10
|
RestGPT: Connecting Large Language Models with Real-World RESTful APIs
|
Tool-augmented large language models (LLMs) have achieved remarkable progress
in tackling a broad range of tasks. However, existing methods are mainly
restricted to specifically designed tools and fail to fulfill complex
instructions, having great limitations when confronted with real-world
scenarios. In this paper, we explore a more realistic scenario by connecting
LLMs with RESTful APIs, which adhere to the widely adopted REST software
architectural style for web service development. To address the practical
challenges of tackling complex instructions, we propose RestGPT, which exploits
the power of LLMs and conducts a coarse-to-fine online planning mechanism to
enhance the abilities of task decomposition and API selection. RestGPT also
contains an API executor tailored for calling RESTful APIs, which can
meticulously formulate parameters and parse API responses. To fully evaluate
the performance of RestGPT, we propose RestBench, a high-quality benchmark
which consists of two real-world scenarios and human-annotated instructions
with gold solution paths. Experiments show that RestGPT is able to achieve
impressive results in complex tasks and has strong robustness, which paves a
new way towards AGI. RestGPT and RestBench are publicly available at
https://restgpt.github.io/.
|
http://arxiv.org/pdf/2306.06624
|
Yifan Song, Weimin Xiong, Dawei Zhu, Wenhao Wu, Han Qian, Mingbo Song, Hailiang Huang, Cheng Li, Ke Wang, Rong Yao, Ye Tian, Sujian Li
|
cs.CL
|
Add RestBench to evaluate RestGPT
| null |
cs.CL
|
20230611
|
20230827
|
[
{
"id": "2302.04761"
},
{
"id": "2303.04671"
},
{
"id": "2205.12255"
},
{
"id": "2304.09842"
},
{
"id": "2305.17926"
},
{
"id": "2302.07842"
},
{
"id": "2210.03629"
},
{
"id": "2303.17580"
},
{
"id": "2305.18752"
},
{
"id": "2201.11903"
},
{
"id": "2303.09014"
},
{
"id": "2303.11366"
},
{
"id": "2305.15334"
},
{
"id": "2303.08128"
},
{
"id": "2304.08354"
},
{
"id": "2304.08244"
},
{
"id": "2301.00234"
},
{
"id": "2306.05301"
},
{
"id": "2302.01560"
},
{
"id": "2206.07682"
}
] |
2306.06770
| 10 |
STARS attempts to verify LLM responses before attempting to achieve the goal indicated by a response. There are many approaches to verification of LLM knowledge, including 1) response sampling (Wang et al. 2023), 2) use of other sources of knowledge such as planning (Valmeekam et al. 2023) or an LLM (Kim, Baldi, and McAleer 2023), and 3) human feedback/annotation (TidyBot). Recursively Criticizes and Improves (RCI; Kim, Baldi, and McAleer 2023) verifies LLM output by prompting the LLM again to identify (potential) issues. Cobbe et al. (2021) train a verifier to rank responses, while self-consistency (Wang et al. 2023) uses voting to select an answer. Diao et al. (2023) combine all three of the above verification strategies by eliciting responses from an LLM, ranking them using an uncertainty metric (a source of knowledge other than the LLM), and then having humans annotate responses for further exploration.
|
2306.06770#10
|
Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis
|
Large language models (LLMs) offer significant promise as a knowledge source
for task learning. Prompt engineering has been shown to be effective for
eliciting knowledge from an LLM, but alone it is insufficient for acquiring
relevant, situationally grounded knowledge for an embodied agent learning novel
tasks. We describe a cognitive-agent approach that extends and complements
prompt engineering, mitigating its limitations and thus enabling an agent to
acquire new task knowledge matched to its native language capabilities,
embodiment, environment, and user preferences. The approach is to increase the
response space of LLMs and deploy general strategies, embedded within the
autonomous agent, to evaluate, repair, and select among candidate responses
produced by the LLM. We describe the approach and experiments that show how an
agent, by retrieving and evaluating a breadth of responses from the LLM, can
achieve 77-94% task completion in one-shot learning without user oversight. The
approach achieves 100% task completion when human oversight (such as an
indication of preference) is provided. Further, the type of oversight largely
shifts from explicit, natural language instruction to simple
confirmation/disconfirmation of high-quality responses that have been vetted by
the agent before presentation to a user.
|
http://arxiv.org/pdf/2306.06770
|
James R. Kirk, Robert E. Wray, Peter Lindes
|
cs.AI, cs.HC, cs.RO, I.2.6; I.2.7
|
7 pages, 8 figures, 3 tables, bibliography, appendix (34 pages
total). Text revised and results extended with additional tasks
| null |
cs.AI
|
20230611
|
20230822
|
[
{
"id": "2302.06706"
},
{
"id": "2112.04359"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2302.12246"
},
{
"id": "2208.09554"
},
{
"id": "2305.05658"
},
{
"id": "2303.17491"
},
{
"id": "2303.08774"
}
] |
2306.06624
| 11 |
Figure 1: Overview of RestGPT. The planner, API selector, executor collaborate to form the coarse- to-fine online planning framework. The caller and response parser in the executor provides robust execution of the RESTful API calling plan.
# 3 RestGPT
# 3.1 RestGPT Architecture
As demonstrated in Figure 1, RestGPT is composed of three main modules: a Planner P, an API Selector S and an Executor E. The planner decomposes each user instruction into several sub-tasks, while the API selector selects APIs to address each sub-task. The executor, consisting of a Caller and a response Parser, performs RESTful API calls and extracts useful information from the JSON response to form the execution result. The core of each component is an LLM with the corresponding prompt and in-context examples describing the function of the component.
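As a minimal sketch of this composition, assuming a generic `llm` callable (the class layout and prompt strings below are illustrative, not the paper's implementation):

```python
def llm(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    raise NotImplementedError

class Module:
    """An LLM wrapped with a role-specific prompt and in-context examples."""
    def __init__(self, role_prompt: str, examples: str):
        self.role_prompt, self.examples = role_prompt, examples

    def __call__(self, query: str) -> str:
        return llm(f"{self.role_prompt}\n{self.examples}\n{query}")

planner      = Module("Decompose the instruction into NL sub-tasks.", "...")
api_selector = Module("Select API endpoints for the current sub-task.", "...")
caller       = Module("Generate parameters/request body for the API.", "...")
parser       = Module("Extract the requested information from the response.", "...")
```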
|
2306.06624#11
|
RestGPT: Connecting Large Language Models with Real-World RESTful APIs
|
Tool-augmented large language models (LLMs) have achieved remarkable progress
in tackling a broad range of tasks. However, existing methods are mainly
restricted to specifically designed tools and fail to fulfill complex
instructions, having great limitations when confronted with real-world
scenarios. In this paper, we explore a more realistic scenario by connecting
LLMs with RESTful APIs, which adhere to the widely adopted REST software
architectural style for web service development. To address the practical
challenges of tackling complex instructions, we propose RestGPT, which exploits
the power of LLMs and conducts a coarse-to-fine online planning mechanism to
enhance the abilities of task decomposition and API selection. RestGPT also
contains an API executor tailored for calling RESTful APIs, which can
meticulously formulate parameters and parse API responses. To fully evaluate
the performance of RestGPT, we propose RestBench, a high-quality benchmark
which consists of two real-world scenarios and human-annotated instructions
with gold solution paths. Experiments show that RestGPT is able to achieve
impressive results in complex tasks and has strong robustness, which paves a
new way towards AGI. RestGPT and RestBench are publicly available at
https://restgpt.github.io/.
|
http://arxiv.org/pdf/2306.06624
|
Yifan Song, Weimin Xiong, Dawei Zhu, Wenhao Wu, Han Qian, Mingbo Song, Hailiang Huang, Cheng Li, Ke Wang, Rong Yao, Ye Tian, Sujian Li
|
cs.CL
|
Add RestBench to evaluate RestGPT
| null |
cs.CL
|
20230611
|
20230827
|
[
{
"id": "2302.04761"
},
{
"id": "2303.04671"
},
{
"id": "2205.12255"
},
{
"id": "2304.09842"
},
{
"id": "2305.17926"
},
{
"id": "2302.07842"
},
{
"id": "2210.03629"
},
{
"id": "2303.17580"
},
{
"id": "2305.18752"
},
{
"id": "2201.11903"
},
{
"id": "2303.09014"
},
{
"id": "2303.11366"
},
{
"id": "2305.15334"
},
{
"id": "2303.08128"
},
{
"id": "2304.08354"
},
{
"id": "2304.08244"
},
{
"id": "2301.00234"
},
{
"id": "2306.05301"
},
{
"id": "2302.01560"
},
{
"id": "2206.07682"
}
] |
2306.06770
| 11 |
While these efforts address similar challenges (or aspects of them), a unique aspect of STARS is the proactive analysis, via embodied reasoning, of many responses retrieved by prompting an LLM. The analysis enables the identification of known problems and targeted repairs. Another unique aspect is that STARS learns goal states for tasks, rather than action sequences to achieve the tasks. The STARS agent also learns task knowledge in one shot, during performance, without prior training. When confronted with the same or similar tasks in the future, the agent can efficiently execute the task without the use of the LLM (or STARS). Encoding persistent task knowledge contrasts with in-context learning (OpenAI 2023).
# 3 Prior Baseline: Template-based Prompting
The agent employs template-based prompting (TBP) to elicit responses from the LLM. Templates enable the agent to construct prompts using context from the task and environment and to introduce prompt examples matched to the agent's capabilities and embodiment. Figure 1 outlines the baseline template-based prompting approach for generating task-goal descriptions (i.e., it replaces the NL dialogue for "Get goal description" in Figure 4). A prompt template is chosen and
[Figure 1: baseline TBP pipeline. Choose & instantiate prompt template → query LLM → choose response(s), with user oversight/preferences when available → set robot goal.]
|
2306.06770#11
|
Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis
|
Large language models (LLMs) offer significant promise as a knowledge source
for task learning. Prompt engineering has been shown to be effective for
eliciting knowledge from an LLM, but alone it is insufficient for acquiring
relevant, situationally grounded knowledge for an embodied agent learning novel
tasks. We describe a cognitive-agent approach that extends and complements
prompt engineering, mitigating its limitations and thus enabling an agent to
acquire new task knowledge matched to its native language capabilities,
embodiment, environment, and user preferences. The approach is to increase the
response space of LLMs and deploy general strategies, embedded within the
autonomous agent, to evaluate, repair, and select among candidate responses
produced by the LLM. We describe the approach and experiments that show how an
agent, by retrieving and evaluating a breadth of responses from the LLM, can
achieve 77-94% task completion in one-shot learning without user oversight. The
approach achieves 100% task completion when human oversight (such as an
indication of preference) is provided. Further, the type of oversight largely
shifts from explicit, natural language instruction to simple
confirmation/disconfirmation of high-quality responses that have been vetted by
the agent before presentation to a user.
|
http://arxiv.org/pdf/2306.06770
|
James R. Kirk, Robert E. Wray, Peter Lindes
|
cs.AI, cs.HC, cs.RO, I.2.6; I.2.7
|
7 pages, 8 figures, 3 tables, bibliography, appendix (34 pages
total). Text revised and results extended with additional tasks
| null |
cs.AI
|
20230611
|
20230822
|
[
{
"id": "2302.06706"
},
{
"id": "2112.04359"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2302.12246"
},
{
"id": "2208.09554"
},
{
"id": "2305.05658"
},
{
"id": "2303.17491"
},
{
"id": "2303.08774"
}
] |
2306.06624
| 12 |
One of the challenges in connecting LLMs with a vast number of APIs is to ensure that the framework is able to fully understand the API documents given the limited context window size of LLMs. As depicted in Figure 1, we designate different modules to read distinct parts of the OpenAPI Specification (OAS). This strategy allows us to leverage OAS information to its fullest potential when working with RESTful APIs. Specifically, the API selector reads the endpoint descriptions of all APIs to select a proper API for solving the current sub-task. Then, the caller uses the detailed documents of the API within the API plan to generate the correct API calling parameters and request body. Lastly, the parser is developed to make use of the response schema within OAS to generate the parsing code for information extraction.
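A sketch of this division of labor over a parsed OAS document (the helper names are ours; the dictionary layout follows OpenAPI conventions):

```python
def endpoint_descriptions(oas: dict) -> str:
    """What the API selector reads: one line per endpoint."""
    lines = []
    for path, ops in oas.get("paths", {}).items():
        for verb, op in ops.items():
            lines.append(f"{verb.upper()} {path}: {op.get('description', '')}")
    return "\n".join(lines)

def detailed_doc(oas: dict, verb: str, path: str) -> dict:
    """What the caller reads: full docs for the selected endpoint only."""
    return oas["paths"][path][verb]

def response_schema(oas: dict, verb: str, path: str) -> dict:
    """What the parser uses: the JSON schema of the 200 response."""
    op = oas["paths"][path][verb]
    return op["responses"]["200"]["content"]["application/json"]["schema"]
```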
# 3.2 Coarse-to-fine Online Planning
To fully exploit the planning and decision making capabilities of LLMs and enable our method to dynamically adjust the plan to changing circumstances when accomplishing real-world user instructions, we propose a coarse-to-fine online planning mechanism in RestGPT.
|
2306.06624#12
|
RestGPT: Connecting Large Language Models with Real-World RESTful APIs
|
Tool-augmented large language models (LLMs) have achieved remarkable progress
in tackling a broad range of tasks. However, existing methods are mainly
restricted to specifically designed tools and fail to fulfill complex
instructions, having great limitations when confronted with real-world
scenarios. In this paper, we explore a more realistic scenario by connecting
LLMs with RESTful APIs, which adhere to the widely adopted REST software
architectural style for web service development. To address the practical
challenges of tackling complex instructions, we propose RestGPT, which exploits
the power of LLMs and conducts a coarse-to-fine online planning mechanism to
enhance the abilities of task decomposition and API selection. RestGPT also
contains an API executor tailored for calling RESTful APIs, which can
meticulously formulate parameters and parse API responses. To fully evaluate
the performance of RestGPT, we propose RestBench, a high-quality benchmark
which consists of two real-world scenarios and human-annotated instructions
with gold solution paths. Experiments show that RestGPT is able to achieve
impressive results in complex tasks and has strong robustness, which paves a
new way towards AGI. RestGPT and RestBench are publicly available at
https://restgpt.github.io/.
|
http://arxiv.org/pdf/2306.06624
|
Yifan Song, Weimin Xiong, Dawei Zhu, Wenhao Wu, Han Qian, Mingbo Song, Hailiang Huang, Cheng Li, Ke Wang, Rong Yao, Ye Tian, Sujian Li
|
cs.CL
|
Add RestBench to evaluate RestGPT
| null |
cs.CL
|
20230611
|
20230827
|
[
{
"id": "2302.04761"
},
{
"id": "2303.04671"
},
{
"id": "2205.12255"
},
{
"id": "2304.09842"
},
{
"id": "2305.17926"
},
{
"id": "2302.07842"
},
{
"id": "2210.03629"
},
{
"id": "2303.17580"
},
{
"id": "2305.18752"
},
{
"id": "2201.11903"
},
{
"id": "2303.09014"
},
{
"id": "2303.11366"
},
{
"id": "2305.15334"
},
{
"id": "2303.08128"
},
{
"id": "2304.08354"
},
{
"id": "2304.08244"
},
{
"id": "2301.00234"
},
{
"id": "2306.05301"
},
{
"id": "2302.01560"
},
{
"id": "2206.07682"
}
] |
2306.06770
| 12 |
[Figure 1: baseline TBP pipeline. Choose & instantiate prompt template → query LLM → choose response(s), with user oversight/preferences when available → set robot goal.]
Figure 1: Baseline approach to elicitation of goal descrip- tions via template-based prompting (TBP).
instantiated with relevant context, the LLM is queried (potentially soliciting multiple responses with varying temperatures), and response(s) are chosen for execution. In this baseline approach, choices are ranked by the mean log probabilities of tokens in each response. Oversight is used to select an LLM response or to give a goal description when all LLM-generated choices are unacceptable. The agent uses the chosen response to attempt to perform the task and, if successful, learns a policy to execute the task in the future (see Figure 4). Few-shot examples in the prompt bias the LLM toward responses that are viable and relevant, matching the agent's NLP capabilities, desired semantic content (e.g., simple goal statements), and embodiment limitations (Kirk et al. 2022). This baseline approach learns the task in one shot but requires substantial user oversight to overcome errors (Kirk et al. 2023).
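The ranking step itself is simple; a sketch, assuming each candidate response arrives with per-token log probabilities (as completion APIs can return):

```python
def rank_by_mean_logprob(candidates):
    """candidates: list of (response_text, token_logprobs) pairs.
    Returns response texts sorted best-first by mean token log probability."""
    def mean_lp(item):
        _, logprobs = item
        return sum(logprobs) / max(len(logprobs), 1)
    return [text for text, _ in sorted(candidates, key=mean_lp, reverse=True)]

# Hypothetical candidates with made-up per-token log probabilities:
ranked = rank_by_mean_logprob([
    ("the dish is in the dish rack", [-0.2, -0.1, -0.4, -0.3, -0.1, -0.2, -0.3]),
    ("the dish is clean",            [-0.9, -0.5, -0.8, -1.2]),
])
```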
# 4 The STARS Approach
|
2306.06770#12
|
Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis
|
Large language models (LLMs) offer significant promise as a knowledge source
for task learning. Prompt engineering has been shown to be effective for
eliciting knowledge from an LLM, but alone it is insufficient for acquiring
relevant, situationally grounded knowledge for an embodied agent learning novel
tasks. We describe a cognitive-agent approach that extends and complements
prompt engineering, mitigating its limitations and thus enabling an agent to
acquire new task knowledge matched to its native language capabilities,
embodiment, environment, and user preferences. The approach is to increase the
response space of LLMs and deploy general strategies, embedded within the
autonomous agent, to evaluate, repair, and select among candidate responses
produced by the LLM. We describe the approach and experiments that show how an
agent, by retrieving and evaluating a breadth of responses from the LLM, can
achieve 77-94% task completion in one-shot learning without user oversight. The
approach achieves 100% task completion when human oversight (such as an
indication of preference) is provided. Further, the type of oversight largely
shifts from explicit, natural language instruction to simple
confirmation/disconfirmation of high-quality responses that have been vetted by
the agent before presentation to a user.
|
http://arxiv.org/pdf/2306.06770
|
James R. Kirk, Robert E. Wray, Peter Lindes
|
cs.AI, cs.HC, cs.RO, I.2.6; I.2.7
|
7 pages, 8 figures, 3 tables, bibliography, appendix (34 pages
total). Text revised and results extended with additional tasks
| null |
cs.AI
|
20230611
|
20230822
|
[
{
"id": "2302.06706"
},
{
"id": "2112.04359"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2302.12246"
},
{
"id": "2208.09554"
},
{
"id": "2305.05658"
},
{
"id": "2303.17491"
},
{
"id": "2303.08774"
}
] |
2306.06624
| 13 |
The workflow of RestGPT can be characterized as an iterative "plan and execution" loop. During the planning stage, the planner and API selector collaborate to accomplish an instruction by iteratively decomposing it into suitable natural language sub-tasks and corresponding APIs. In each step $t$, the planner $P$ leverages commonsense knowledge to generate a natural language (NL) sub-task $p_t$ based on the user instruction $q$, previous NL plans $(p_1, \ldots, p_{t-1})$, and execution results $(r_1, \ldots, r_{t-1})$, thereby constructing a high-level NL plan. Then, the API selector $S$ reads the descriptions of available API endpoints to select appropriate APIs and construct the finer API plan $a_t$, which may contain a single API call or multiple API calls to solve the current NL plan $p_t$. The executor $E$ then executes the API plan $a_t$ and obtains the execution result $r_t$ for the current step. This process can be formulated as:
NL Plan: $p_t \leftarrow P(q;\, p_1, r_1, \ldots, p_{t-1}, r_{t-1})$, API Plan: $a_t \leftarrow S(p_t;\, r_1, \ldots, r_{t-1})$, Exec. Res.: $r_t \leftarrow E(a_t;\, r_1, \ldots, r_{t-1})$. (1)
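A sketch of one iteration of Eq. (1), treating the three prompted modules as Python callables (the function names and history bookkeeping are our own scaffolding, not the paper's interface):

```python
def restgpt_step(q, history, planner, api_selector, executor):
    """One 'plan and execute' iteration (Eq. 1).
    history holds the pairs (p_1, r_1), ..., (p_{t-1}, r_{t-1})."""
    results = [r for _, r in history]
    p_t = planner(q, history)          # NL sub-task from instruction + history
    a_t = api_selector(p_t, results)   # finer-grained API plan for p_t
    r_t = executor(a_t, results)       # call the APIs and parse the response
    history.append((p_t, r_t))
    return p_t, r_t
```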
|
2306.06624#13
|
RestGPT: Connecting Large Language Models with Real-World RESTful APIs
|
Tool-augmented large language models (LLMs) have achieved remarkable progress
in tackling a broad range of tasks. However, existing methods are mainly
restricted to specifically designed tools and fail to fulfill complex
instructions, having great limitations when confronted with real-world
scenarios. In this paper, we explore a more realistic scenario by connecting
LLMs with RESTful APIs, which adhere to the widely adopted REST software
architectural style for web service development. To address the practical
challenges of tackling complex instructions, we propose RestGPT, which exploits
the power of LLMs and conducts a coarse-to-fine online planning mechanism to
enhance the abilities of task decomposition and API selection. RestGPT also
contains an API executor tailored for calling RESTful APIs, which can
meticulously formulate parameters and parse API responses. To fully evaluate
the performance of RestGPT, we propose RestBench, a high-quality benchmark
which consists of two real-world scenarios and human-annotated instructions
with gold solution paths. Experiments show that RestGPT is able to achieve
impressive results in complex tasks and has strong robustness, which paves a
new way towards AGI. RestGPT and RestBench are publicly available at
https://restgpt.github.io/.
|
http://arxiv.org/pdf/2306.06624
|
Yifan Song, Weimin Xiong, Dawei Zhu, Wenhao Wu, Han Qian, Mingbo Song, Hailiang Huang, Cheng Li, Ke Wang, Rong Yao, Ye Tian, Sujian Li
|
cs.CL
|
Add RestBench to evaluate RestGPT
| null |
cs.CL
|
20230611
|
20230827
|
[
{
"id": "2302.04761"
},
{
"id": "2303.04671"
},
{
"id": "2205.12255"
},
{
"id": "2304.09842"
},
{
"id": "2305.17926"
},
{
"id": "2302.07842"
},
{
"id": "2210.03629"
},
{
"id": "2303.17580"
},
{
"id": "2305.18752"
},
{
"id": "2201.11903"
},
{
"id": "2303.09014"
},
{
"id": "2303.11366"
},
{
"id": "2305.15334"
},
{
"id": "2303.08128"
},
{
"id": "2304.08354"
},
{
"id": "2304.08244"
},
{
"id": "2301.00234"
},
{
"id": "2306.05301"
},
{
"id": "2302.01560"
},
{
"id": "2206.07682"
}
] |
2306.06770
| 13 |
# 4 The STARS Approach
STARS extends and supplements the TBP baseline with three processes to address the limitations of the baseline: retrieving a tree of LLM responses via beam search (ST: Search Tree), analyzing and repairing responses (AR: Analysis and Repair), and using the LLM to select a goal response from the candidates (S: Selection). After presenting each of these components of STARS, we describe the oversight strategy of soliciting user feedback.
Figure 2 outlines the process of the STARS approach (blue boxes are re-purposed elements from TBP; green boxes are new components of STARS). With STARS, the agent retrieves goal descriptions from the LLM (the rest of the task-learning process is the same). STARS ensures that the goal descriptions it retrieves from the LLM are viable for the agent. Acquiring goal knowledge is crucial to learning novel tasks, enabling an agent with planning capabilities to perform the new task. Goal-state learning enables greater flexibility than learning a sequence of actions because goal-state knowledge can transfer to other situations that require different action sequences to achieve the same goal.
# 4.1 Search Tree (ST)
|
2306.06770#13
|
Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis
|
Large language models (LLMs) offer significant promise as a knowledge source
for task learning. Prompt engineering has been shown to be effective for
eliciting knowledge from an LLM, but alone it is insufficient for acquiring
relevant, situationally grounded knowledge for an embodied agent learning novel
tasks. We describe a cognitive-agent approach that extends and complements
prompt engineering, mitigating its limitations and thus enabling an agent to
acquire new task knowledge matched to its native language capabilities,
embodiment, environment, and user preferences. The approach is to increase the
response space of LLMs and deploy general strategies, embedded within the
autonomous agent, to evaluate, repair, and select among candidate responses
produced by the LLM. We describe the approach and experiments that show how an
agent, by retrieving and evaluating a breadth of responses from the LLM, can
achieve 77-94% task completion in one-shot learning without user oversight. The
approach achieves 100% task completion when human oversight (such as an
indication of preference) is provided. Further, the type of oversight largely
shifts from explicit, natural language instruction to simple
confirmation/disconfirmation of high-quality responses that have been vetted by
the agent before presentation to a user.
|
http://arxiv.org/pdf/2306.06770
|
James R. Kirk, Robert E. Wray, Peter Lindes
|
cs.AI, cs.HC, cs.RO, I.2.6; I.2.7
|
7 pages, 8 figures, 3 tables, bibliography, appendix (34 pages
total). Text revised and results extended with additional tasks
| null |
cs.AI
|
20230611
|
20230822
|
[
{
"id": "2302.06706"
},
{
"id": "2112.04359"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2302.12246"
},
{
"id": "2208.09554"
},
{
"id": "2305.05658"
},
{
"id": "2303.17491"
},
{
"id": "2303.08774"
}
] |
2306.06624
| 14 |
In this way, the planner and API selector are dedicated to NL sub-task planning and API selection, respectively, effectively utilizing the large language model's abilities of planning and text comprehension.
Alongside the "plan and execution" loop, we design two special states, "continue" and "end", for the planner to monitor the execution result from the executor. Specifically, if the planner finds that the current executor's output $r_t$ has not completed the present NL sub-task $p_t$, it will output a "continue" signal and provide a special NL plan $p_{t+1}$ to the API selector, instructing it to continue fulfilling the plan $p_t$. In such cases, the API selector will re-generate a new API plan based on the original NL plan $p_t$, the new NL plan $p_{t+1}$, the previous API plan $a_t$, and the execution result $r_t$. This process is described as:
API Plan: $a_{t+1} \leftarrow S(p_t, p_{t+1};\, r_1, \ldots, r_{t-1};\, a_t, r_t)$, Exec. Res.: $r_{t+1} \leftarrow E(a_{t+1};\, r_1, \ldots, r_{t-1}, r_t)$. (2)
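Extending the earlier step sketch, the outer loop with the two special planner states might look as follows (the `signal` attribute is one plausible encoding of the planner's "continue"/"end" output, not the paper's exact interface):

```python
def restgpt_loop(q, planner, api_selector, executor, max_steps=10):
    history, prev_api_plan = [], None   # history: [(nl_plan, result), ...]
    for _ in range(max_steps):
        p = planner(q, history)         # NL plan plus a control signal
        if p.signal == "end":           # user request fulfilled
            return p.final_result
        if p.signal == "continue":      # Eq. (2): keep working on the last sub-task
            prev_plan, prev_result = history[-1]
            a = api_selector((prev_plan, p.text), history, prev_api_plan, prev_result)
        else:                           # fresh sub-task: Eq. (1)
            a = api_selector(p.text, history)
        r = executor(a, history)
        history.append((p.text, r))
        prev_api_plan = a
```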
|
2306.06624#14
|
RestGPT: Connecting Large Language Models with Real-World RESTful APIs
|
Tool-augmented large language models (LLMs) have achieved remarkable progress
in tackling a broad range of tasks. However, existing methods are mainly
restricted to specifically designed tools and fail to fulfill complex
instructions, having great limitations when confronted with real-world
scenarios. In this paper, we explore a more realistic scenario by connecting
LLMs with RESTful APIs, which adhere to the widely adopted REST software
architectural style for web service development. To address the practical
challenges of tackling complex instructions, we propose RestGPT, which exploits
the power of LLMs and conducts a coarse-to-fine online planning mechanism to
enhance the abilities of task decomposition and API selection. RestGPT also
contains an API executor tailored for calling RESTful APIs, which can
meticulously formulate parameters and parse API responses. To fully evaluate
the performance of RestGPT, we propose RestBench, a high-quality benchmark
which consists of two real-world scenarios and human-annotated instructions
with gold solution paths. Experiments show that RestGPT is able to achieve
impressive results in complex tasks and has strong robustness, which paves a
new way towards AGI. RestGPT and RestBench are publicly available at
https://restgpt.github.io/.
|
http://arxiv.org/pdf/2306.06624
|
Yifan Song, Weimin Xiong, Dawei Zhu, Wenhao Wu, Han Qian, Mingbo Song, Hailiang Huang, Cheng Li, Ke Wang, Rong Yao, Ye Tian, Sujian Li
|
cs.CL
|
Add RestBench to evaluate RestGPT
| null |
cs.CL
|
20230611
|
20230827
|
[
{
"id": "2302.04761"
},
{
"id": "2303.04671"
},
{
"id": "2205.12255"
},
{
"id": "2304.09842"
},
{
"id": "2305.17926"
},
{
"id": "2302.07842"
},
{
"id": "2210.03629"
},
{
"id": "2303.17580"
},
{
"id": "2305.18752"
},
{
"id": "2201.11903"
},
{
"id": "2303.09014"
},
{
"id": "2303.11366"
},
{
"id": "2305.15334"
},
{
"id": "2303.08128"
},
{
"id": "2304.08354"
},
{
"id": "2304.08244"
},
{
"id": "2301.00234"
},
{
"id": "2306.05301"
},
{
"id": "2302.01560"
},
{
"id": "2206.07682"
}
] |
2306.06770
| 14 |
# 4.1 Search Tree (ST)
In prior work with TBP (Figure 1), we used higher temperature parameters to retrieve multiple responses for the same prompt. This approach resulted in many duplicate responses and more responses that were not viable, deviating from targeted content and form. Similar to others (Logeswaran et al. 2022; Wang et al. 2023), here we enable the agent to use a beam-search strategy to generate a breadth of high-probability responses from a single prompt.
[Figure 2: STARS pipeline. Choose/instantiate prompt template → beam-search retrieval of LLM response(s) → analyze and repair response(s) → LLM-based selection → set robot goal.]
Figure 2: Agent approach for Search Tree, Analyze, Repair, and Selection (STARS)
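With an open LLM, such a breadth of distinct high-probability completions can be drawn with (diverse) beam search; a sketch using Hugging Face transformers (the model choice, prompt, and decoding settings are illustrative, not the paper's):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")            # illustrative model
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The goal of tidying the kitchen is that", return_tensors="pt")
beams = model.generate(
    **inputs,
    num_beams=10,               # explore ten beams in parallel
    num_return_sequences=10,    # keep the whole tree of candidates
    num_beam_groups=5,          # diverse beam search reduces duplicates
    diversity_penalty=0.5,
    max_new_tokens=30,
)
candidates = [tok.decode(b, skip_special_tokens=True) for b in beams]
```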
|
2306.06770#14
|
Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis
|
Large language models (LLMs) offer significant promise as a knowledge source
for task learning. Prompt engineering has been shown to be effective for
eliciting knowledge from an LLM, but alone it is insufficient for acquiring
relevant, situationally grounded knowledge for an embodied agent learning novel
tasks. We describe a cognitive-agent approach that extends and complements
prompt engineering, mitigating its limitations and thus enabling an agent to
acquire new task knowledge matched to its native language capabilities,
embodiment, environment, and user preferences. The approach is to increase the
response space of LLMs and deploy general strategies, embedded within the
autonomous agent, to evaluate, repair, and select among candidate responses
produced by the LLM. We describe the approach and experiments that show how an
agent, by retrieving and evaluating a breadth of responses from the LLM, can
achieve 77-94% task completion in one-shot learning without user oversight. The
approach achieves 100% task completion when human oversight (such as an
indication of preference) is provided. Further, the type of oversight largely
shifts from explicit, natural language instruction to simple
confirmation/disconfirmation of high-quality responses that have been vetted by
the agent before presentation to a user.
|
http://arxiv.org/pdf/2306.06770
|
James R. Kirk, Robert E. Wray, Peter Lindes
|
cs.AI, cs.HC, cs.RO, I.2.6; I.2.7
|
7 pages, 8 figures, 3 tables, bibliography, appendix (34 pages
total). Text revised and results extended with additional tasks
| null |
cs.AI
|
20230611
|
20230822
|
[
{
"id": "2302.06706"
},
{
"id": "2112.04359"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2302.12246"
},
{
"id": "2208.09554"
},
{
"id": "2305.05658"
},
{
"id": "2303.17491"
},
{
"id": "2303.08774"
}
] |
2306.06624
| 15 |
If the planner assesses that the user's request has been completed, it will give the termination signal "end" and output the final result. With such a design, our method achieves more flexible online planning, capable of handling the various situations encountered in real-world scenarios.
The planner, API selector, and executor collaborate to form RestGPT's coarse-to-fine online planning framework. This framework significantly enhances the ability to decompose tasks and select appropriate APIs, providing the model with the flexibility to effectively tackle user instructions.
# 3.3 API Plan Execution
Once an API calling plan is generated, the next step is to execute it. The executor $E$ consists of a caller and a response parser. The caller must read the API documents carefully and generate the correct parameters or request body for the API call. Due to the constraint of maximum context length, we filter the API documents and preserve only the APIs appearing in the current API plan $a_t$. Given the generated parameters and request body, we use the Requests Python library to call the RESTful API. Besides, to guide the response parser in extracting information from the API response, the caller also generates a response description and an output instruction for the response parser. Figure 2 presents an example output of the caller.
API Plan: GET /movie/{movie_id} to get the title of the movie
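A sketch of how the executor might issue this call with the Requests library (the method, URL, and fields come from the Figure 2 example; the dict scaffolding around them is our own assumption):

```python
import requests

call = {  # fields as produced by the caller for GET /movie/{movie_id}
    "method": "GET",
    "url": "https://api.themoviedb.org/3/movie/843",  # movie_id=843 substituted
    "description": "The response is details of a movie with id 843",
    "output_instruction": "What is the title of the movie?",
}

# Issue the stateless HTTP request and obtain the raw JSON response.
resp = requests.request(call["method"], call["url"])
json_response = resp.json()

# The description and output instruction are then handed to the response
# parser, which extracts the answer (the movie title) from json_response.
```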
|
2306.06624#15
|
RestGPT: Connecting Large Language Models with Real-World RESTful APIs
|
Tool-augmented large language models (LLMs) have achieved remarkable progress
in tackling a broad range of tasks. However, existing methods are mainly
restricted to specifically designed tools and fail to fulfill complex
instructions, having great limitations when confronted with real-world
scenarios. In this paper, we explore a more realistic scenario by connecting
LLMs with RESTful APIs, which adhere to the widely adopted REST software
architectural style for web service development. To address the practical
challenges of tackling complex instructions, we propose RestGPT, which exploits
the power of LLMs and conducts a coarse-to-fine online planning mechanism to
enhance the abilities of task decomposition and API selection. RestGPT also
contains an API executor tailored for calling RESTful APIs, which can
meticulously formulate parameters and parse API responses. To fully evaluate
the performance of RestGPT, we propose RestBench, a high-quality benchmark
which consists of two real-world scenarios and human-annotated instructions
with gold solution paths. Experiments show that RestGPT is able to achieve
impressive results in complex tasks and has strong robustness, which paves a
new way towards AGI. RestGPT and RestBench are publicly available at
https://restgpt.github.io/.
|
http://arxiv.org/pdf/2306.06624
|
Yifan Song, Weimin Xiong, Dawei Zhu, Wenhao Wu, Han Qian, Mingbo Song, Hailiang Huang, Cheng Li, Ke Wang, Rong Yao, Ye Tian, Sujian Li
|
cs.CL
|
Add RestBench to evaluate RestGPT
| null |
cs.CL
|
20230611
|
20230827
|
[
{
"id": "2302.04761"
},
{
"id": "2303.04671"
},
{
"id": "2205.12255"
},
{
"id": "2304.09842"
},
{
"id": "2305.17926"
},
{
"id": "2302.07842"
},
{
"id": "2210.03629"
},
{
"id": "2303.17580"
},
{
"id": "2305.18752"
},
{
"id": "2201.11903"
},
{
"id": "2303.09014"
},
{
"id": "2303.11366"
},
{
"id": "2305.15334"
},
{
"id": "2303.08128"
},
{
"id": "2304.08354"
},
{
"id": "2304.08244"
},
{
"id": "2301.00234"
},
{
"id": "2306.05301"
},
{
"id": "2302.01560"
},
{
"id": "2206.07682"
}
] |
2306.06770
| 15 |
Figure 2: Agent approach for Search Tree, Analyze, Repair, and Selection (STARS)
# 4.2 Analyze and Repair (AR)
While many of the responses retrieved from the LLM are reasonable, they often fail to meet other requirements: being matched to the agent's embodiment, language capabilities, and situation. An agent that attempts to use such a mismatched response will fail to achieve the task. Analysis and Repair detects and categorizes mismatches, drawing on the cognitive agent's knowledge and capabilities to identify problems, and then attempts to repair responses with identifiable mismatches.
|
2306.06770#15
|
Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis
|
Large language models (LLMs) offer significant promise as a knowledge source
for task learning. Prompt engineering has been shown to be effective for
eliciting knowledge from an LLM, but alone it is insufficient for acquiring
relevant, situationally grounded knowledge for an embodied agent learning novel
tasks. We describe a cognitive-agent approach that extends and complements
prompt engineering, mitigating its limitations and thus enabling an agent to
acquire new task knowledge matched to its native language capabilities,
embodiment, environment, and user preferences. The approach is to increase the
response space of LLMs and deploy general strategies, embedded within the
autonomous agent, to evaluate, repair, and select among candidate responses
produced by the LLM. We describe the approach and experiments that show how an
agent, by retrieving and evaluating a breadth of responses from the LLM, can
achieve 77-94% task completion in one-shot learning without user oversight. The
approach achieves 100% task completion when human oversight (such as an
indication of preference) is provided. Further, the type of oversight largely
shifts from explicit, natural language instruction to simple
confirmation/disconfirmation of high-quality responses that have been vetted by
the agent before presentation to a user.
|
http://arxiv.org/pdf/2306.06770
|
James R. Kirk, Robert E. Wray, Peter Lindes
|
cs.AI, cs.HC, cs.RO, I.2.6; I.2.7
|
7 pages, 8 figures, 3 tables, bibliography, appendix (34 pages
total). Text revised and results extended with additional tasks
| null |
cs.AI
|
20230611
|
20230822
|
[
{
"id": "2302.06706"
},
{
"id": "2112.04359"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2302.12246"
},
{
"id": "2208.09554"
},
{
"id": "2305.05658"
},
{
"id": "2303.17491"
},
{
"id": "2303.08774"
}
] |
2306.06624
| 16 |
API Plan: GET /movie/{movie_id} to get the title of the movie
Caller output (given the API doc. for GET /movie/{movie_id}):
Method: GET
URL: https://api.themoviedb.org/3/movie/{movie_id}
Parameters: movie_id=843
Description: The response is details of a movie with id 843
Output Instruction: What is the title of the movie?
Figure 2: Example output of the caller.
RESTful APIs typically return a JSON-formatted response with much redundant information. The executor needs to extract the required information from the response and return it to the planner. However, the response may sometimes have a complex structure or be lengthy, making it difficult to extract important information by directly prompting the LLMs. To address this problem, we make use of the response schema defined in the OAS. Specifically, we utilize the coding capability of the LLM to generate Python parsing code based on the provided schema and the output instruction generated by the caller. Next, the Python code is executed to get the final result. If there are no execution exceptions or errors, the output is returned. Otherwise, the LLM is prompted to parse the response directly as a backup.
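A sketch of this generate-execute-fallback logic (the `llm` callable and prompt wording are placeholders; a real system would sandbox the `exec` of model-generated code):

```python
def parse_response(json_response, response_schema, output_instruction, llm):
    # 1. Ask the LLM to write parsing code from the schema + instruction.
    code = llm(
        f"Response schema: {response_schema}\n"
        f"Instruction: {output_instruction}\n"
        "Write Python that reads the dict `data` and stores the answer in `result`."
    )
    # 2. Execute the generated code; on any error, fall back to direct parsing.
    try:
        scope = {"data": json_response}
        exec(code, scope)
        return scope["result"]
    except Exception:
        # Backup: prompt the LLM to extract the answer from the raw response.
        return llm(f"{json_response}\n{output_instruction}")
```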
# 4 RestBench
|
2306.06624#16
|
RestGPT: Connecting Large Language Models with Real-World RESTful APIs
|
Tool-augmented large language models (LLMs) have achieved remarkable progress
in tackling a broad range of tasks. However, existing methods are mainly
restricted to specifically designed tools and fail to fulfill complex
instructions, having great limitations when confronted with real-world
scenarios. In this paper, we explore a more realistic scenario by connecting
LLMs with RESTful APIs, which adhere to the widely adopted REST software
architectural style for web service development. To address the practical
challenges of tackling complex instructions, we propose RestGPT, which exploits
the power of LLMs and conducts a coarse-to-fine online planning mechanism to
enhance the abilities of task decomposition and API selection. RestGPT also
contains an API executor tailored for calling RESTful APIs, which can
meticulously formulate parameters and parse API responses. To fully evaluate
the performance of RestGPT, we propose RestBench, a high-quality benchmark
which consists of two real-world scenarios and human-annotated instructions
with gold solution paths. Experiments show that RestGPT is able to achieve
impressive results in complex tasks and has strong robustness, which paves a
new way towards AGI. RestGPT and RestBench are publicly available at
https://restgpt.github.io/.
|
http://arxiv.org/pdf/2306.06624
|
Yifan Song, Weimin Xiong, Dawei Zhu, Wenhao Wu, Han Qian, Mingbo Song, Hailiang Huang, Cheng Li, Ke Wang, Rong Yao, Ye Tian, Sujian Li
|
cs.CL
|
Add RestBench to evaluate RestGPT
| null |
cs.CL
|
20230611
|
20230827
|
[
{
"id": "2302.04761"
},
{
"id": "2303.04671"
},
{
"id": "2205.12255"
},
{
"id": "2304.09842"
},
{
"id": "2305.17926"
},
{
"id": "2302.07842"
},
{
"id": "2210.03629"
},
{
"id": "2303.17580"
},
{
"id": "2305.18752"
},
{
"id": "2201.11903"
},
{
"id": "2303.09014"
},
{
"id": "2303.11366"
},
{
"id": "2305.15334"
},
{
"id": "2303.08128"
},
{
"id": "2304.08354"
},
{
"id": "2304.08244"
},
{
"id": "2301.00234"
},
{
"id": "2306.05301"
},
{
"id": "2302.01560"
},
{
"id": "2206.07682"
}
] |
2306.06770
| 16 |
The overall process for Analysis and Repair is illustrated in Figure 3. The agent performs a mental simulation of what would happen if it attempted to use a response from the LLM, using the same knowledge of parsing and grounding it uses when performing the task. The analysis includes a linguistic evaluation (orange: whether the agent can parse and interpret the language and terms), a grounding analysis (green: whether each referent in the response can be grounded to an object observable in the environment), and an analysis of affordances (blue: whether the agent can achieve the actions on objects implied by clauses in the goal response). The "AR" process currently addresses these three sources of mismatch (future work can extend repair to additional sources; a schematic sketch of the checks follows the list):
• Language: To assess potential interpretability issues, the agent parses the response with its native NLP capabilities and examines the output. The language processor indicates whether a sentence can be interpreted and identifies unknown words.
⢠Situation: To detect grounding issues, the agent evalu- ates the results of its language comprehension process. When a sentence contains a referring expression to an object, such as a cabinet, the agentâs language processing identifies grounding candidates observable by the agent. Failure to ground a referent indicates a mismatch with the current situation.
|
2306.06770#16
|
Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis
|
Large language models (LLMs) offer significant promise as a knowledge source
for task learning. Prompt engineering has been shown to be effective for
eliciting knowledge from an LLM, but alone it is insufficient for acquiring
relevant, situationally grounded knowledge for an embodied agent learning novel
tasks. We describe a cognitive-agent approach that extends and complements
prompt engineering, mitigating its limitations and thus enabling an agent to
acquire new task knowledge matched to its native language capabilities,
embodiment, environment, and user preferences. The approach is to increase the
response space of LLMs and deploy general strategies, embedded within the
autonomous agent, to evaluate, repair, and select among candidate responses
produced by the LLM. We describe the approach and experiments that show how an
agent, by retrieving and evaluating a breadth of responses from the LLM, can
achieve 77-94% task completion in one-shot learning without user oversight. The
approach achieves 100% task completion when human oversight (such as an
indication of preference) is provided. Further, the type of oversight largely
shifts from explicit, natural language instruction to simple
confirmation/disconfirmation of high-quality responses that have been vetted by
the agent before presentation to a user.
|
http://arxiv.org/pdf/2306.06770
|
James R. Kirk, Robert E. Wray, Peter Lindes
|
cs.AI, cs.HC, cs.RO, I.2.6; I.2.7
|
7 pages, 8 figures, 3 tables, bibliography, appendix (34 pages
total). Text revised and results extended with additional tasks
| null |
cs.AI
|
20230611
|
20230822
|
[
{
"id": "2302.06706"
},
{
"id": "2112.04359"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2302.12246"
},
{
"id": "2208.09554"
},
{
"id": "2305.05658"
},
{
"id": "2303.17491"
},
{
"id": "2303.08774"
}
] |
2306.06624
| 17 |
# 4 RestBench
To assess the effectiveness of RestGPT in processing complex user instructions through RESTful APIs, we introduce RestBench, a high-quality human-annotated dataset comprising two real-world scenarios. Existing research has proposed several benchmarks for the evaluation of tool/API-augmented LLMs [23, 21, 9]. However, these benchmarks primarily focus on simple tasks that can be accomplished using a single API. We hope RestBench can facilitate exploration of utilizing multiple APIs to address real-world user instructions.
| Scenario | Num. APIs | Path len. 1 | Path len. 2 | Path len. 3 | Path len. 4 | Avg. Len. | Total |
|----------|-----------|-------------|-------------|-------------|-------------|-----------|-------|
| TMDB     | 54        | 5           | 66          | 27          | 2           | 2.3       | 100   |
| Spotify  | 40        | 8           | 18          | 22          | 9           | 2.6       | 57    |
Table 2: Statistics of RestBench test set. We report the number of instructions with different lengths of solution path.
# 4.1 Scenarios and APIs
|
2306.06624#17
|
RestGPT: Connecting Large Language Models with Real-World RESTful APIs
|
Tool-augmented large language models (LLMs) have achieved remarkable progress
in tackling a broad range of tasks. However, existing methods are mainly
restricted to specifically designed tools and fail to fulfill complex
instructions, having great limitations when confronted with real-world
scenarios. In this paper, we explore a more realistic scenario by connecting
LLMs with RESTful APIs, which adhere to the widely adopted REST software
architectural style for web service development. To address the practical
challenges of tackling complex instructions, we propose RestGPT, which exploits
the power of LLMs and conducts a coarse-to-fine online planning mechanism to
enhance the abilities of task decomposition and API selection. RestGPT also
contains an API executor tailored for calling RESTful APIs, which can
meticulously formulate parameters and parse API responses. To fully evaluate
the performance of RestGPT, we propose RestBench, a high-quality benchmark
which consists of two real-world scenarios and human-annotated instructions
with gold solution paths. Experiments show that RestGPT is able to achieve
impressive results in complex tasks and has strong robustness, which paves a
new way towards AGI. RestGPT and RestBench is publicly available at
https://restgpt.github.io/.
|
http://arxiv.org/pdf/2306.06624
|
Yifan Song, Weimin Xiong, Dawei Zhu, Wenhao Wu, Han Qian, Mingbo Song, Hailiang Huang, Cheng Li, Ke Wang, Rong Yao, Ye Tian, Sujian Li
|
cs.CL
|
Add RestBench to evaluate RestGPT
| null |
cs.CL
|
20230611
|
20230827
|
[
{
"id": "2302.04761"
},
{
"id": "2303.04671"
},
{
"id": "2205.12255"
},
{
"id": "2304.09842"
},
{
"id": "2305.17926"
},
{
"id": "2302.07842"
},
{
"id": "2210.03629"
},
{
"id": "2303.17580"
},
{
"id": "2305.18752"
},
{
"id": "2201.11903"
},
{
"id": "2303.09014"
},
{
"id": "2303.11366"
},
{
"id": "2305.15334"
},
{
"id": "2303.08128"
},
{
"id": "2304.08354"
},
{
"id": "2304.08244"
},
{
"id": "2301.00234"
},
{
"id": "2306.05301"
},
{
"id": "2302.01560"
},
{
"id": "2206.07682"
}
] |
2306.06770
| 17 |
Embodiment and Affordance: The agent detects embodiment and affordance mismatches using its knowledge of objects (semantic memory) and properties detected from perception (environment). E.g., when it processes a clause in a goal response such as "the dish rack is in the cabinet," it evaluates if the object to be moved ("dish rack") has the property "grabbable." Repair is coupled to these diagnostic mismatches detected during analysis. For each case, to repair the mismatch, the
|
2306.06770#17
|
Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis
|
Large language models (LLMs) offer significant promise as a knowledge source
for task learning. Prompt engineering has been shown to be effective for
eliciting knowledge from an LLM, but alone it is insufficient for acquiring
relevant, situationally grounded knowledge for an embodied agent learning novel
tasks. We describe a cognitive-agent approach that extends and complements
prompt engineering, mitigating its limitations and thus enabling an agent to
acquire new task knowledge matched to its native language capabilities,
embodiment, environment, and user preferences. The approach is to increase the
response space of LLMs and deploy general strategies, embedded within the
autonomous agent, to evaluate, repair, and select among candidate responses
produced by the LLM. We describe the approach and experiments that show how an
agent, by retrieving and evaluating a breadth of responses from the LLM, can
achieve 77-94% task completion in one-shot learning without user oversight. The
approach achieves 100% task completion when human oversight (such as an
indication of preference) is provided. Further, the type of oversight largely
shifts from explicit, natural language instruction to simple
confirmation/disconfirmation of high-quality responses that have been vetted by
the agent before presentation to a user.
|
http://arxiv.org/pdf/2306.06770
|
James R. Kirk, Robert E. Wray, Peter Lindes
|
cs.AI, cs.HC, cs.RO, I.2.6; I.2.7
|
7 pages, 8 figures, 3 tables, bibliography, appendix (34 pages
total). Text revised and results extended with additional tasks
| null |
cs.AI
|
20230611
|
20230822
|
[
{
"id": "2302.06706"
},
{
"id": "2112.04359"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2302.12246"
},
{
"id": "2208.09554"
},
{
"id": "2305.05658"
},
{
"id": "2303.17491"
},
{
"id": "2303.08774"
}
] |
2306.06624
| 18 |
Table 2: Statistics of RestBench test set. We report the number of instructions with different lengths of solution path.
# 4.1 Scenarios and APIs
We select two common real-world scenarios: TMDB movie database and Spotify music player. The main consideration is to evaluate the capabilities of RestGPT: (1) augmenting LLMs with external specialized domain database via RESTful APIs; (2) connecting LLMs with RESTful APIs to autonomously control real-world applications. TMDB offers official RESTful APIs encompassing the information of movies, TVs, actors, and images. Spotify music player provides API endpoints to retrieve content metadata, receive recommendations, create and manage playlists, and control playback. For these two scenarios, we filter out 54 and 40 commonly used APIs respectively and obtain the corresponding OpenAPI Specifications to build RestBench.
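To make the setup concrete, here is a minimal, hypothetical sketch of how an executor might call one of the TMDB RESTful endpoints that appear in the gold solution paths below; the parameter values and authentication token are illustrative, not taken from the paper.

```python
import requests

BASE = "https://api.themoviedb.org/3"

def call_endpoint(path_template: str, path_params: dict, token: str) -> dict:
    # Fill in-path parameters such as {media_type} and {time_window}.
    url = BASE + path_template.format(**path_params)
    resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    return resp.json()

# e.g. GET /trending/{media_type}/{time_window}
trending = call_endpoint("/trending/{media_type}/{time_window}",
                         {"media_type": "movie", "time_window": "week"},
                         token="YOUR_TMDB_TOKEN")
```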
# 4.2 Dataset Collection
|
2306.06624#18
|
RestGPT: Connecting Large Language Models with Real-World RESTful APIs
|
Tool-augmented large language models (LLMs) have achieved remarkable progress
in tackling a broad range of tasks. However, existing methods are mainly
restricted to specifically designed tools and fail to fulfill complex
instructions, having great limitations when confronted with real-world
scenarios. In this paper, we explore a more realistic scenario by connecting
LLMs with RESTful APIs, which adhere to the widely adopted REST software
architectural style for web service development. To address the practical
challenges of tackling complex instructions, we propose RestGPT, which exploits
the power of LLMs and conducts a coarse-to-fine online planning mechanism to
enhance the abilities of task decomposition and API selection. RestGPT also
contains an API executor tailored for calling RESTful APIs, which can
meticulously formulate parameters and parse API responses. To fully evaluate
the performance of RestGPT, we propose RestBench, a high-quality benchmark
which consists of two real-world scenarios and human-annotated instructions
with gold solution paths. Experiments show that RestGPT is able to achieve
impressive results in complex tasks and has strong robustness, which paves a
new way towards AGI. RestGPT and RestBench is publicly available at
https://restgpt.github.io/.
|
http://arxiv.org/pdf/2306.06624
|
Yifan Song, Weimin Xiong, Dawei Zhu, Wenhao Wu, Han Qian, Mingbo Song, Hailiang Huang, Cheng Li, Ke Wang, Rong Yao, Ye Tian, Sujian Li
|
cs.CL
|
Add RestBench to evaluate RestGPT
| null |
cs.CL
|
20230611
|
20230827
|
[
{
"id": "2302.04761"
},
{
"id": "2303.04671"
},
{
"id": "2205.12255"
},
{
"id": "2304.09842"
},
{
"id": "2305.17926"
},
{
"id": "2302.07842"
},
{
"id": "2210.03629"
},
{
"id": "2303.17580"
},
{
"id": "2305.18752"
},
{
"id": "2201.11903"
},
{
"id": "2303.09014"
},
{
"id": "2303.11366"
},
{
"id": "2305.15334"
},
{
"id": "2303.08128"
},
{
"id": "2304.08354"
},
{
"id": "2304.08244"
},
{
"id": "2301.00234"
},
{
"id": "2306.05301"
},
{
"id": "2302.01560"
},
{
"id": "2206.07682"
}
] |
2306.06770
| 18 |
[Figure 3 depicts the agent's internal simulation: it parses the initial LLM response "The goal is that the ceramic-bowl is in the dish rack and the dish rack is in the cabinet," grounds referents via the NLP parser, environment, and semantic memory, detects the ungrounded referent "cabinet" and the non-grabbable "dish rack," and emits the repair prompts "Cannot see a cabinet." and "Dish rack is not grabbable."]
Figure 3: Agent analysis of mismatches via internal simulation
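The mismatch detection pictured in Figure 3 can be summarized with a small sketch; the object properties and helper below are illustrative stand-ins, not the authors' implementation.

```python
# Toy semantic memory: which object types can be grasped (illustrative values).
SEMANTIC_MEMORY = {"dish rack": {"grabbable": False},
                   "ceramic bowl": {"grabbable": True}}

def detect_mismatches(objects_to_move, visible_objects):
    """Return repair messages for goal objects that are unseen or ungraspable."""
    repairs = []
    for obj in objects_to_move:
        if obj not in visible_objects:
            repairs.append(f"Cannot see a {obj}.")
        elif not SEMANTIC_MEMORY.get(obj, {}).get("grabbable", True):
            repairs.append(f"{obj.capitalize()} is not grabbable.")
    return repairs

print(detect_mismatches(["dish rack", "cabinet"],
                        {"dish rack", "ceramic bowl"}))
# ['Dish rack is not grabbable.', 'Cannot see a cabinet.']
```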
|
2306.06770#18
|
Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis
|
Large language models (LLMs) offer significant promise as a knowledge source
for task learning. Prompt engineering has been shown to be effective for
eliciting knowledge from an LLM, but alone it is insufficient for acquiring
relevant, situationally grounded knowledge for an embodied agent learning novel
tasks. We describe a cognitive-agent approach that extends and complements
prompt engineering, mitigating its limitations and thus enabling an agent to
acquire new task knowledge matched to its native language capabilities,
embodiment, environment, and user preferences. The approach is to increase the
response space of LLMs and deploy general strategies, embedded within the
autonomous agent, to evaluate, repair, and select among candidate responses
produced by the LLM. We describe the approach and experiments that show how an
agent, by retrieving and evaluating a breadth of responses from the LLM, can
achieve 77-94% task completion in one-shot learning without user oversight. The
approach achieves 100% task completion when human oversight (such as an
indication of preference) is provided. Further, the type of oversight largely
shifts from explicit, natural language instruction to simple
confirmation/disconfirmation of high-quality responses that have been vetted by
the agent before presentation to a user.
|
http://arxiv.org/pdf/2306.06770
|
James R. Kirk, Robert E. Wray, Peter Lindes
|
cs.AI, cs.HC, cs.RO, I.2.6; I.2.7
|
7 pages, 8 figures, 3 tables, bibliography, appendix (34 pages
total). Text revised and results extended with additional tasks
| null |
cs.AI
|
20230611
|
20230822
|
[
{
"id": "2302.06706"
},
{
"id": "2112.04359"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2302.12246"
},
{
"id": "2208.09554"
},
{
"id": "2305.05658"
},
{
"id": "2303.17491"
},
{
"id": "2303.08774"
}
] |
2306.06624
| 19 |
# 4.2 Dataset Collection
High-quality instructions generally satisfy two crucial aspects: (1) to reflect a wide range of real user needs; (2) to cover different levels of complexity to fully study the reasoning and planning ability of our method. To achieve these goals, we adopt a bottom-up instruction collection approach. We employ 6 experts that work on NLP research to brainstorm instructions for different combinations of APIs. Along with the instructions, the experts need to annotate the gold API solution path for each instruction. To guarantee the quality of the instructions, we employ two additional experts to thoroughly verify the solvability of each instruction and the correctness of the corresponding solution path. Ultimately, we annotate 10 instruction-solution pairs for each scenario as the development set, and 100 pairs for TMDB and 57 pairs for Spotify as the test set. Though the data scale is not large, these instructions are typical of the frequently raised user requests. Moreover, different from prior work which uses LLMs to get the API calling procedure, we utilize human-labeled API solution paths for evaluation. Table 3 presents example instructions of the two scenarios. The statistics of RestBench are shown in Table 2.
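As an illustration of what one annotated pair might look like, here is a hypothetical record shape inferred from the description above and the gold path shown below; the field names and instruction text are ours, not from the released dataset.

```python
example_record = {
    "scenario": "TMDB",
    # The instruction text below is illustrative, not an actual dataset entry.
    "instruction": "Who starred in the most trending movie of this week?",
    "gold_solution_path": [
        "GET /trending/{media_type}/{time_window}",
        "GET /movie/{movie_id}/credits",
    ],
}
```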
Gold Solution Path:
1. GET /trending/{media_type}/{time_window}
2. GET /movie/{movie_id}/credits
|
2306.06624#19
|
RestGPT: Connecting Large Language Models with Real-World RESTful APIs
|
Tool-augmented large language models (LLMs) have achieved remarkable progress
in tackling a broad range of tasks. However, existing methods are mainly
restricted to specifically designed tools and fail to fulfill complex
instructions, having great limitations when confronted with real-world
scenarios. In this paper, we explore a more realistic scenario by connecting
LLMs with RESTful APIs, which adhere to the widely adopted REST software
architectural style for web service development. To address the practical
challenges of tackling complex instructions, we propose RestGPT, which exploits
the power of LLMs and conducts a coarse-to-fine online planning mechanism to
enhance the abilities of task decomposition and API selection. RestGPT also
contains an API executor tailored for calling RESTful APIs, which can
meticulously formulate parameters and parse API responses. To fully evaluate
the performance of RestGPT, we propose RestBench, a high-quality benchmark
which consists of two real-world scenarios and human-annotated instructions
with gold solution paths. Experiments show that RestGPT is able to achieve
impressive results in complex tasks and has strong robustness, which paves a
new way towards AGI. RestGPT and RestBench is publicly available at
https://restgpt.github.io/.
|
http://arxiv.org/pdf/2306.06624
|
Yifan Song, Weimin Xiong, Dawei Zhu, Wenhao Wu, Han Qian, Mingbo Song, Hailiang Huang, Cheng Li, Ke Wang, Rong Yao, Ye Tian, Sujian Li
|
cs.CL
|
Add RestBench to evaluate RestGPT
| null |
cs.CL
|
20230611
|
20230827
|
[
{
"id": "2302.04761"
},
{
"id": "2303.04671"
},
{
"id": "2205.12255"
},
{
"id": "2304.09842"
},
{
"id": "2305.17926"
},
{
"id": "2302.07842"
},
{
"id": "2210.03629"
},
{
"id": "2303.17580"
},
{
"id": "2305.18752"
},
{
"id": "2201.11903"
},
{
"id": "2303.09014"
},
{
"id": "2303.11366"
},
{
"id": "2305.15334"
},
{
"id": "2303.08128"
},
{
"id": "2304.08354"
},
{
"id": "2304.08244"
},
{
"id": "2301.00234"
},
{
"id": "2306.05301"
},
{
"id": "2302.01560"
},
{
"id": "2206.07682"
}
] |
2306.06770
| 19 |
Figure 3: Agent analysis of mismatches via internal simulation
agent constructs a new prompt using a template for repairing that category of mismatch. The agent instantiates the template by appending the non-viable response with an instruction indicating the specific mismatch that occurred, e.g., "No. Cannot see a cabinet." or "No. Rack is not grabbable."¹ This repair prompt is then used to generate a new tree of responses (via ST).
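A minimal sketch of this repair step, assuming string templates keyed by mismatch category (the template texts mirror the examples above; everything else is illustrative):

```python
REPAIR_TEMPLATES = {
    "ungrounded": "No. Cannot see a {obj}.",
    "not_grabbable": "No. {obj} is not grabbable.",
}

def build_repair_prompt(base_prompt: str, bad_response: str,
                        mismatch_kind: str, obj: str) -> str:
    # Append the non-viable response plus an instruction naming the mismatch;
    # the result seeds a new tree of candidate responses (via ST).
    note = REPAIR_TEMPLATES[mismatch_kind].format(obj=obj)
    return f"{base_prompt}{bad_response}\n{note}\n"
```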
# 4.3 Selection (S)
ST and AR are designed to generate viable candidates. The agent must select one response to use. Rather than using mean log probability (as in the Figure 1 system) or voting (as in self-consistency; Wang et al. 2023), the new Selection strategy employs the LLM for choosing a response. The agent constructs a prompt with the candidates and asks which is the most reasonable goal given the task context. The prompt solicits a single integer response from the LLM, indicating which response is the best.
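A sketch of such a Selection prompt, assuming a generic `query_llm` completion helper (a stand-in, not an API from the paper):

```python
def select_candidate(candidates, task_context, query_llm):
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
    prompt = (f"Task context: {task_context}\n"
              f"Candidate goal descriptions:\n{numbered}\n"
              "Which candidate is the most reasonable goal? "
              "Answer with a single integer.")
    # The prompt solicits one integer naming the chosen candidate.
    return candidates[int(query_llm(prompt).strip()) - 1]
```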
|
2306.06770#19
|
Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis
|
Large language models (LLMs) offer significant promise as a knowledge source
for task learning. Prompt engineering has been shown to be effective for
eliciting knowledge from an LLM, but alone it is insufficient for acquiring
relevant, situationally grounded knowledge for an embodied agent learning novel
tasks. We describe a cognitive-agent approach that extends and complements
prompt engineering, mitigating its limitations and thus enabling an agent to
acquire new task knowledge matched to its native language capabilities,
embodiment, environment, and user preferences. The approach is to increase the
response space of LLMs and deploy general strategies, embedded within the
autonomous agent, to evaluate, repair, and select among candidate responses
produced by the LLM. We describe the approach and experiments that show how an
agent, by retrieving and evaluating a breadth of responses from the LLM, can
achieve 77-94% task completion in one-shot learning without user oversight. The
approach achieves 100% task completion when human oversight (such as an
indication of preference) is provided. Further, the type of oversight largely
shifts from explicit, natural language instruction to simple
confirmation/disconfirmation of high-quality responses that have been vetted by
the agent before presentation to a user.
|
http://arxiv.org/pdf/2306.06770
|
James R. Kirk, Robert E. Wray, Peter Lindes
|
cs.AI, cs.HC, cs.RO, I.2.6; I.2.7
|
7 pages, 8 figures, 3 tables, bibliography, appendix (34 pages
total). Text revised and results extended with additional tasks
| null |
cs.AI
|
20230611
|
20230822
|
[
{
"id": "2302.06706"
},
{
"id": "2112.04359"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2302.12246"
},
{
"id": "2208.09554"
},
{
"id": "2305.05658"
},
{
"id": "2303.17491"
},
{
"id": "2303.08774"
}
] |
2306.06624
| 20 |
Gold Solution Path:
1. GET /trending/{media_type}/{time_window}
2. GET /movie/{movie_id}/credits
# 4.3 Evaluation Metrics
Since some user requests are time-dependent (see the TMDB example in Table 3), it is impractical to annotate a fixed ground-truth answer for each instruction, whereas the API solution paths for most instructions remain consistent. If the model-generated API call path contains the gold API call path as a subsequence (with the elements not necessarily being contiguous), we consider the generated path correct. To further evaluate the model's performance, we rely on human evaluation to determine if the model result successfully fulfills the user query. We calculate the proportion of correct paths and successful query completions as metrics, i.e., Correct Path Rate and Success Rate. Moreover, the number of actual API calls can be utilized to measure the planning efficiency of different methods. Given the length of gold solutions, we further define Δ Solution Len. as the
|
2306.06624#20
|
RestGPT: Connecting Large Language Models with Real-World RESTful APIs
|
Tool-augmented large language models (LLMs) have achieved remarkable progress
in tackling a broad range of tasks. However, existing methods are mainly
restricted to specifically designed tools and fail to fulfill complex
instructions, having great limitations when confronted with real-world
scenarios. In this paper, we explore a more realistic scenario by connecting
LLMs with RESTful APIs, which adhere to the widely adopted REST software
architectural style for web service development. To address the practical
challenges of tackling complex instructions, we propose RestGPT, which exploits
the power of LLMs and conducts a coarse-to-fine online planning mechanism to
enhance the abilities of task decomposition and API selection. RestGPT also
contains an API executor tailored for calling RESTful APIs, which can
meticulously formulate parameters and parse API responses. To fully evaluate
the performance of RestGPT, we propose RestBench, a high-quality benchmark
which consists of two real-world scenarios and human-annotated instructions
with gold solution paths. Experiments show that RestGPT is able to achieve
impressive results in complex tasks and has strong robustness, which paves a
new way towards AGI. RestGPT and RestBench is publicly available at
https://restgpt.github.io/.
|
http://arxiv.org/pdf/2306.06624
|
Yifan Song, Weimin Xiong, Dawei Zhu, Wenhao Wu, Han Qian, Mingbo Song, Hailiang Huang, Cheng Li, Ke Wang, Rong Yao, Ye Tian, Sujian Li
|
cs.CL
|
Add RestBench to evaluate RestGPT
| null |
cs.CL
|
20230611
|
20230827
|
[
{
"id": "2302.04761"
},
{
"id": "2303.04671"
},
{
"id": "2205.12255"
},
{
"id": "2304.09842"
},
{
"id": "2305.17926"
},
{
"id": "2302.07842"
},
{
"id": "2210.03629"
},
{
"id": "2303.17580"
},
{
"id": "2305.18752"
},
{
"id": "2201.11903"
},
{
"id": "2303.09014"
},
{
"id": "2303.11366"
},
{
"id": "2305.15334"
},
{
"id": "2303.08128"
},
{
"id": "2304.08354"
},
{
"id": "2304.08244"
},
{
"id": "2301.00234"
},
{
"id": "2306.05301"
},
{
"id": "2302.01560"
},
{
"id": "2206.07682"
}
] |
2306.06770
| 20 |
# 4.4 User Oversight (O)
The correct goal for a task depends on human preference (e.g., some users prefer storing cereal in the cupboard, others prefer the pantry). The original ITL agent solicited all task knowledge from a human. STARS reduces user interaction while still ensuring correct learning. The agent solicits user feedback by asking if a retrieved goal is correct (yes/no) before using it (as shown below). Selection determines which option to present. If the first response is rejected, Selection is repeated with the rejected option removed. If all responses are rejected, the user must provide the correct goal description.
Agent: For a mug in the dish rack, is the goal that the mug is in the cupboard and the cupboard is closed?
User: Yes.
¹ The appendix includes a complete example of the prompts for repairs and selection.
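The oversight loop above can be summarized in a short sketch; `select_candidate` and `ask_user` are illustrative stand-ins, not functions from the paper.

```python
def solicit_goal(candidates, task_context, select_candidate, ask_user):
    remaining = list(candidates)
    while remaining:
        best = select_candidate(remaining, task_context)
        if ask_user(f"Is the goal that {best}? (yes/no)") == "yes":
            return best
        remaining.remove(best)  # rerun Selection without the rejected option
    # All candidates rejected: the user supplies the goal description.
    return ask_user("Please describe the correct goal.")
```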
[Figure 4 depicts the ITL loop: if the policy is unknown, planning produces an action sequence; the agent executes the actions, and once the goal is achieved, retrospective analysis learns the policy.]
Figure 4: ITL process for learning goals and policy.
|
2306.06770#20
|
Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis
|
Large language models (LLMs) offer significant promise as a knowledge source
for task learning. Prompt engineering has been shown to be effective for
eliciting knowledge from an LLM, but alone it is insufficient for acquiring
relevant, situationally grounded knowledge for an embodied agent learning novel
tasks. We describe a cognitive-agent approach that extends and complements
prompt engineering, mitigating its limitations and thus enabling an agent to
acquire new task knowledge matched to its native language capabilities,
embodiment, environment, and user preferences. The approach is to increase the
response space of LLMs and deploy general strategies, embedded within the
autonomous agent, to evaluate, repair, and select among candidate responses
produced by the LLM. We describe the approach and experiments that show how an
agent, by retrieving and evaluating a breadth of responses from the LLM, can
achieve 77-94% task completion in one-shot learning without user oversight. The
approach achieves 100% task completion when human oversight (such as an
indication of preference) is provided. Further, the type of oversight largely
shifts from explicit, natural language instruction to simple
confirmation/disconfirmation of high-quality responses that have been vetted by
the agent before presentation to a user.
|
http://arxiv.org/pdf/2306.06770
|
James R. Kirk, Robert E. Wray, Peter Lindes
|
cs.AI, cs.HC, cs.RO, I.2.6; I.2.7
|
7 pages, 8 figures, 3 tables, bibliography, appendix (34 pages
total). Text revised and results extended with additional tasks
| null |
cs.AI
|
20230611
|
20230822
|
[
{
"id": "2302.06706"
},
{
"id": "2112.04359"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2302.12246"
},
{
"id": "2208.09554"
},
{
"id": "2305.05658"
},
{
"id": "2303.17491"
},
{
"id": "2303.08774"
}
] |
2306.06624
| 21 |
| Model | TMDB Success% | TMDB CP% | TMDB Δ Sol. Len. | Spotify Success% | Spotify CP% | Spotify Δ Sol. Len. |
|---|---|---|---|---|---|---|
| Offline [9] | 29.0 | 33.0 | +1.52 | 14.5 | 36.4 | +1.10 |
| DEPS [7] | 38.0 | 43.0 | +1.20 | 19.3 | 43.8 | +1.74 |
| ReAct [16] | 44.0 | 57.0 | +0.76 | 54.5 | 49.1 | +0.31 |
| Reflexion [26] | 52.0 | 59.0 | +1.37 | 59.6 | 61.4 | +1.68 |
| RestGPT | 75.0 | 79.0 | +0.55 | 72.7 | 74.5 | +0.25 |
| – w/o Planner† | 44.0 | 57.0 | +0.76 | 54.5 | 49.1 | +0.31 |
| – w/o Parser | 46.0 | 53.0 | +0.60 | 47.3 | 52.7 | +0.24 |
| RestGPT (ChatGPT) | 68.0 | 65.0 | +0.72 | 69.1 | 72.3 | +0.28 |
| RestGPT (Llama2-13B) | 0.0 | 0.0 | – | 0.0 | 0.0 | – |
| RestGPT (Vicuna-13B) | 9.0 | 15.0 | +1.21 | 12.7 | 20.6 | +1.52 |
|
2306.06624#21
|
RestGPT: Connecting Large Language Models with Real-World RESTful APIs
|
Tool-augmented large language models (LLMs) have achieved remarkable progress
in tackling a broad range of tasks. However, existing methods are mainly
restricted to specifically designed tools and fail to fulfill complex
instructions, having great limitations when confronted with real-world
scenarios. In this paper, we explore a more realistic scenario by connecting
LLMs with RESTful APIs, which adhere to the widely adopted REST software
architectural style for web service development. To address the practical
challenges of tackling complex instructions, we propose RestGPT, which exploits
the power of LLMs and conducts a coarse-to-fine online planning mechanism to
enhance the abilities of task decomposition and API selection. RestGPT also
contains an API executor tailored for calling RESTful APIs, which can
meticulously formulate parameters and parse API responses. To fully evaluate
the performance of RestGPT, we propose RestBench, a high-quality benchmark
which consists of two real-world scenarios and human-annotated instructions
with gold solution paths. Experiments show that RestGPT is able to achieve
impressive results in complex tasks and has strong robustness, which paves a
new way towards AGI. RestGPT and RestBench is publicly available at
https://restgpt.github.io/.
|
http://arxiv.org/pdf/2306.06624
|
Yifan Song, Weimin Xiong, Dawei Zhu, Wenhao Wu, Han Qian, Mingbo Song, Hailiang Huang, Cheng Li, Ke Wang, Rong Yao, Ye Tian, Sujian Li
|
cs.CL
|
Add RestBench to evaluate RestGPT
| null |
cs.CL
|
20230611
|
20230827
|
[
{
"id": "2302.04761"
},
{
"id": "2303.04671"
},
{
"id": "2205.12255"
},
{
"id": "2304.09842"
},
{
"id": "2305.17926"
},
{
"id": "2302.07842"
},
{
"id": "2210.03629"
},
{
"id": "2303.17580"
},
{
"id": "2305.18752"
},
{
"id": "2201.11903"
},
{
"id": "2303.09014"
},
{
"id": "2303.11366"
},
{
"id": "2305.15334"
},
{
"id": "2303.08128"
},
{
"id": "2304.08354"
},
{
"id": "2304.08244"
},
{
"id": "2301.00234"
},
{
"id": "2306.05301"
},
{
"id": "2302.01560"
},
{
"id": "2206.07682"
}
] |
2306.06770
| 21 |
Figure 4: ITL process for learning goals and policy.
# 5 Experiment Design
Here we describe the embodied agent that incorporates STARS, experimental design, measures, and results for evaluating STARS for online learning of three different tasks: tidying the kitchen, storing groceries, and organizing an office. We evaluate how well STARS addresses the previously outlined requirements and also examine the relative impact of components of STARS. STARS learns descriptions of goal states, while systems such as SayCan, InnerMonologue, and TidyBot learn action sequences. We do not directly compare performance for these tasks against these systems because of their different learning targets.
|
2306.06770#21
|
Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis
|
Large language models (LLMs) offer significant promise as a knowledge source
for task learning. Prompt engineering has been shown to be effective for
eliciting knowledge from an LLM, but alone it is insufficient for acquiring
relevant, situationally grounded knowledge for an embodied agent learning novel
tasks. We describe a cognitive-agent approach that extends and complements
prompt engineering, mitigating its limitations and thus enabling an agent to
acquire new task knowledge matched to its native language capabilities,
embodiment, environment, and user preferences. The approach is to increase the
response space of LLMs and deploy general strategies, embedded within the
autonomous agent, to evaluate, repair, and select among candidate responses
produced by the LLM. We describe the approach and experiments that show how an
agent, by retrieving and evaluating a breadth of responses from the LLM, can
achieve 77-94% task completion in one-shot learning without user oversight. The
approach achieves 100% task completion when human oversight (such as an
indication of preference) is provided. Further, the type of oversight largely
shifts from explicit, natural language instruction to simple
confirmation/disconfirmation of high-quality responses that have been vetted by
the agent before presentation to a user.
|
http://arxiv.org/pdf/2306.06770
|
James R. Kirk, Robert E. Wray, Peter Lindes
|
cs.AI, cs.HC, cs.RO, I.2.6; I.2.7
|
7 pages, 8 figures, 3 tables, bibliography, appendix (34 pages
total). Text revised and results extended with additional tasks
| null |
cs.AI
|
20230611
|
20230822
|
[
{
"id": "2302.06706"
},
{
"id": "2112.04359"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2302.12246"
},
{
"id": "2208.09554"
},
{
"id": "2305.05658"
},
{
"id": "2303.17491"
},
{
"id": "2303.08774"
}
] |
2306.06624
| 22 |
Table 4: Success rate (%), Correct Path rate (CP, %), and Δ Solution Length on two scenarios of RestBench. The best results are in boldface. † RestGPT w/o planner is equivalent to ReAct equipped with our proposed executor.
mean number of additional API calls required to successfully execute an instruction:
$$\Delta\,\text{Solution Len.} = \frac{1}{N_s}\sum_{i=0}^{N}\left(L_i^{\text{real}} - L_i^{\text{gold}}\right)\cdot\mathbb{I}(i,\text{success}),$$

where $N_s$ is the number of successfully accomplished instructions, $L_i^{\text{real}}$ and $L_i^{\text{gold}}$ are the actual and gold numbers of API calls for the $i$-th instruction respectively, and $\mathbb{I}(i,\text{success})$ denotes whether the $i$-th instruction is successfully completed.
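Both evaluation metrics are easy to state in code; the sketch below (our own, with made-up episode data) implements the subsequence test for Correct Path Rate defined earlier and the Δ Solution Len. formula above.

```python
def is_correct_path(predicted: list, gold: list) -> bool:
    # Gold path must appear in order, but not necessarily contiguously.
    it = iter(predicted)
    return all(api in it for api in gold)

def delta_solution_len(episodes) -> float:
    # episodes: (actual_calls, gold_calls, success) per instruction
    done = [(real, gold) for real, gold, ok in episodes if ok]
    return sum(r - g for r, g in done) / len(done) if done else float("nan")

gold = ["GET /trending/{media_type}/{time_window}",
        "GET /movie/{movie_id}/credits"]
pred = ["GET /search/movie"] + gold          # one extra call, still correct
assert is_correct_path(pred, gold)
print(delta_solution_len([(3, 2, True), (3, 3, True)]))  # (1 + 0) / 2 = +0.5
```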
# 5 Experiments
# 5.1 Experimental Setup
We compare RestGPT with four recent baselines, including the offline introspective method [9] used in HuggingGPT [13] and Chameleon [12], DEPS [7], ReAct [16], and Reflexion [26]. Since some methods are not originally designed for tool/API usage, we reproduce them and add the API executor proposed in Section 3.3 to enable them to call RESTful APIs. The maximum number of steps for DEPS is set to 10 and the maximum number of trials for Reflexion is set to 2.
|
2306.06624#22
|
RestGPT: Connecting Large Language Models with Real-World RESTful APIs
|
Tool-augmented large language models (LLMs) have achieved remarkable progress
in tackling a broad range of tasks. However, existing methods are mainly
restricted to specifically designed tools and fail to fulfill complex
instructions, having great limitations when confronted with real-world
scenarios. In this paper, we explore a more realistic scenario by connecting
LLMs with RESTful APIs, which adhere to the widely adopted REST software
architectural style for web service development. To address the practical
challenges of tackling complex instructions, we propose RestGPT, which exploits
the power of LLMs and conducts a coarse-to-fine online planning mechanism to
enhance the abilities of task decomposition and API selection. RestGPT also
contains an API executor tailored for calling RESTful APIs, which can
meticulously formulate parameters and parse API responses. To fully evaluate
the performance of RestGPT, we propose RestBench, a high-quality benchmark
which consists of two real-world scenarios and human-annotated instructions
with gold solution paths. Experiments show that RestGPT is able to achieve
impressive results in complex tasks and has strong robustness, which paves a
new way towards AGI. RestGPT and RestBench is publicly available at
https://restgpt.github.io/.
|
http://arxiv.org/pdf/2306.06624
|
Yifan Song, Weimin Xiong, Dawei Zhu, Wenhao Wu, Han Qian, Mingbo Song, Hailiang Huang, Cheng Li, Ke Wang, Rong Yao, Ye Tian, Sujian Li
|
cs.CL
|
Add RestBench to evaluate RestGPT
| null |
cs.CL
|
20230611
|
20230827
|
[
{
"id": "2302.04761"
},
{
"id": "2303.04671"
},
{
"id": "2205.12255"
},
{
"id": "2304.09842"
},
{
"id": "2305.17926"
},
{
"id": "2302.07842"
},
{
"id": "2210.03629"
},
{
"id": "2303.17580"
},
{
"id": "2305.18752"
},
{
"id": "2201.11903"
},
{
"id": "2303.09014"
},
{
"id": "2303.11366"
},
{
"id": "2305.15334"
},
{
"id": "2303.08128"
},
{
"id": "2304.08354"
},
{
"id": "2304.08244"
},
{
"id": "2301.00234"
},
{
"id": "2306.05301"
},
{
"id": "2302.01560"
},
{
"id": "2206.07682"
}
] |
2306.06770
| 22 |
Agent: To test STARS, we embed it in an existing embodied ITL agent, replacing the human interaction that provided natural language descriptions of goals for tasks and subtasks. The original agent learned a variety of diverse tasks (from puzzles to mobile patrol tasks) in many different physical (Fetch robot, mobile robot, and tabletop arm) and simulated (AI2Thor, April simulator) robotic domains (Mohan et al. 2012; Mininger 2021; Kirk and Laird 2019), but required considerable human input. Figure 4 depicts the ITL learning process for goals that the agent elicits from a user as part of learning the task. The entire ITL agent learning pipeline can also learn new concepts, new actions (when planning knowledge is insufficient), and lower-level skills via instruction. We focus on the goal-learning pipeline here because STARS exploits an LLM to learn goal descriptions (replacing the green box) without changing other aspects of that pipeline. The ITL learning process depended on substantial user input to provide interpretable and accurate descriptions of goals. When a policy for achieving a goal is unknown, internal planning finds a sequence of actions that achieves the goal. A side effect of successful planning is that the agent learns long-term policy knowledge in one shot via the agent architecture's procedural learning mechanism. Whenever the task arises in the future, that learned knowledge guides agent decision-making without planning or human interaction.
|
2306.06770#22
|
Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis
|
Large language models (LLMs) offer significant promise as a knowledge source
for task learning. Prompt engineering has been shown to be effective for
eliciting knowledge from an LLM, but alone it is insufficient for acquiring
relevant, situationally grounded knowledge for an embodied agent learning novel
tasks. We describe a cognitive-agent approach that extends and complements
prompt engineering, mitigating its limitations and thus enabling an agent to
acquire new task knowledge matched to its native language capabilities,
embodiment, environment, and user preferences. The approach is to increase the
response space of LLMs and deploy general strategies, embedded within the
autonomous agent, to evaluate, repair, and select among candidate responses
produced by the LLM. We describe the approach and experiments that show how an
agent, by retrieving and evaluating a breadth of responses from the LLM, can
achieve 77-94% task completion in one-shot learning without user oversight. The
approach achieves 100% task completion when human oversight (such as an
indication of preference) is provided. Further, the type of oversight largely
shifts from explicit, natural language instruction to simple
confirmation/disconfirmation of high-quality responses that have been vetted by
the agent before presentation to a user.
|
http://arxiv.org/pdf/2306.06770
|
James R. Kirk, Robert E. Wray, Peter Lindes
|
cs.AI, cs.HC, cs.RO, I.2.6; I.2.7
|
7 pages, 8 figures, 3 tables, bibliography, appendix (34 pages
total). Text revised and results extended with additional tasks
| null |
cs.AI
|
20230611
|
20230822
|
[
{
"id": "2302.06706"
},
{
"id": "2112.04359"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2302.12246"
},
{
"id": "2208.09554"
},
{
"id": "2305.05658"
},
{
"id": "2303.17491"
},
{
"id": "2303.08774"
}
] |
2306.06624
| 23 |
To showcase the planning and API calling capabilities of our method, we implement two ablation variants of RestGPT. The first variant involves removing the planner and allowing the API selector to directly choose APIs in a ReAct style. This approach can be seen as ReAct equipped with our proposed executor. The second one is to replace the schema-based response parser with an LLM that directly reads and extracts the required information from the JSON response.
In our experiments, we employ text-davinci-003 from OpenAI as the LLM for RestGPT and all baselines. The decoding temperature is set to 0 for the most deterministic generation.
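For reference, this decoding setup corresponds to a call like the following under the legacy (pre-1.0) OpenAI Python SDK that exposed text-davinci-003; the prompt text is a placeholder and the snippet is illustrative, not the authors' code.

```python
import openai

openai.api_key = "YOUR_API_KEY"
completion = openai.Completion.create(
    model="text-davinci-003",
    prompt="<planner prompt here>",  # placeholder, not an actual prompt
    temperature=0,                   # most deterministic generation
    max_tokens=256,
)
text = completion["choices"][0]["text"]
```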
# 5.2 Main Results
Table 4 shows the performance of RestGPT and baselines on two scenarios. Our approach outperforms all other methods in both scenarios, achieving a success rate of 75% on the movie database and over 70% on the music player. Note that in most cases, the correct path rate is slightly higher than the success rate, indicating that the method may generate a correct API calling plan but fail to execute it. RestGPT also stands out with its minimal solution length, showcasing the superior planning ability of the coarse-to-fine online planning mechanism.
|
2306.06624#23
|
RestGPT: Connecting Large Language Models with Real-World RESTful APIs
|
Tool-augmented large language models (LLMs) have achieved remarkable progress
in tackling a broad range of tasks. However, existing methods are mainly
restricted to specifically designed tools and fail to fulfill complex
instructions, having great limitations when confronted with real-world
scenarios. In this paper, we explore a more realistic scenario by connecting
LLMs with RESTful APIs, which adhere to the widely adopted REST software
architectural style for web service development. To address the practical
challenges of tackling complex instructions, we propose RestGPT, which exploits
the power of LLMs and conducts a coarse-to-fine online planning mechanism to
enhance the abilities of task decomposition and API selection. RestGPT also
contains an API executor tailored for calling RESTful APIs, which can
meticulously formulate parameters and parse API responses. To fully evaluate
the performance of RestGPT, we propose RestBench, a high-quality benchmark
which consists of two real-world scenarios and human-annotated instructions
with gold solution paths. Experiments show that RestGPT is able to achieve
impressive results in complex tasks and has strong robustness, which paves a
new way towards AGI. RestGPT and RestBench is publicly available at
https://restgpt.github.io/.
|
http://arxiv.org/pdf/2306.06624
|
Yifan Song, Weimin Xiong, Dawei Zhu, Wenhao Wu, Han Qian, Mingbo Song, Hailiang Huang, Cheng Li, Ke Wang, Rong Yao, Ye Tian, Sujian Li
|
cs.CL
|
Add RestBench to evaluate RestGPT
| null |
cs.CL
|
20230611
|
20230827
|
[
{
"id": "2302.04761"
},
{
"id": "2303.04671"
},
{
"id": "2205.12255"
},
{
"id": "2304.09842"
},
{
"id": "2305.17926"
},
{
"id": "2302.07842"
},
{
"id": "2210.03629"
},
{
"id": "2303.17580"
},
{
"id": "2305.18752"
},
{
"id": "2201.11903"
},
{
"id": "2303.09014"
},
{
"id": "2303.11366"
},
{
"id": "2305.15334"
},
{
"id": "2303.08128"
},
{
"id": "2304.08354"
},
{
"id": "2304.08244"
},
{
"id": "2301.00234"
},
{
"id": "2306.05301"
},
{
"id": "2302.01560"
},
{
"id": "2206.07682"
}
] |
2306.06770
| 23 |
Setting: A simulated office and kitchen with a mobile robot created in the APRIL MAGIC simulator. The robot can move around the room, approach objects, and has a single arm that can grasp and manipulate all objects relevant to the task to be learned. For the "tidy kitchen" task (the largest task), the kitchen is populated with 35 objects that commonly exist in a kitchen (plates, condiments, utensils, etc.). Objects are initially distributed on a table, counter, and in the dish rack. For the "store groceries" task, 15 objects are contained in bags on the kitchen floor that must be stored (into the fridge, cupboard, or pantry). For the "organize of-

| Condition | Description |
|---|---|
| TBP | Template-Based Prompting (Baseline) |
| TBP+O | TBP with human Oversight |
| ST | Beam Search Tree |
| STS | Beam search with LLM Selection |
| STAR | Beam search with Analysis (check viability) and Repair |
| STARS | Search-tree, A&R, LLM Selection |
| STARS+O | STARS with human oversight |
| Trial #2 | Task performance on second presentation after learning with STARS+O |
Table 1: Definition of experimental conditions.
|
2306.06770#23
|
Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis
|
Large language models (LLMs) offer significant promise as a knowledge source
for task learning. Prompt engineering has been shown to be effective for
eliciting knowledge from an LLM, but alone it is insufficient for acquiring
relevant, situationally grounded knowledge for an embodied agent learning novel
tasks. We describe a cognitive-agent approach that extends and complements
prompt engineering, mitigating its limitations and thus enabling an agent to
acquire new task knowledge matched to its native language capabilities,
embodiment, environment, and user preferences. The approach is to increase the
response space of LLMs and deploy general strategies, embedded within the
autonomous agent, to evaluate, repair, and select among candidate responses
produced by the LLM. We describe the approach and experiments that show how an
agent, by retrieving and evaluating a breadth of responses from the LLM, can
achieve 77-94% task completion in one-shot learning without user oversight. The
approach achieves 100% task completion when human oversight (such as an
indication of preference) is provided. Further, the type of oversight largely
shifts from explicit, natural language instruction to simple
confirmation/disconfirmation of high-quality responses that have been vetted by
the agent before presentation to a user.
|
http://arxiv.org/pdf/2306.06770
|
James R. Kirk, Robert E. Wray, Peter Lindes
|
cs.AI, cs.HC, cs.RO, I.2.6; I.2.7
|
7 pages, 8 figures, 3 tables, bibliography, appendix (34 pages
total). Text revised and results extended with additional tasks
| null |
cs.AI
|
20230611
|
20230822
|
[
{
"id": "2302.06706"
},
{
"id": "2112.04359"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2302.12246"
},
{
"id": "2208.09554"
},
{
"id": "2305.05658"
},
{
"id": "2303.17491"
},
{
"id": "2303.08774"
}
] |
2306.06624
| 24 |
Ablation experiments on coarse-to-fine planning and the schema-based parser show both mechanisms are conducive to the model performance. Particularly, when removing the planner, the performance degrades significantly, indicating that current LLMs are unable to simultaneously conduct planning, API understanding, and selection. Thus, the coarse-to-fine planning mechanism plays a crucial role in
[Stacked-bar chart for TMDB and Spotify (text-davinci-003 and ChatGPT), with segments for Planner Error, API Selector Error, Caller Error, Parser Error, and No Error.]
Figure 3: Error breakdown of RestGPT on RestBench. Error types are categorized by the module where the error occurred.
[Line plots of success rate: panels (a) TMDB and (b) Spotify against gold path length, and panel (c) TMDB against the number of APIs (10–120), comparing Offline, Reflexion, DEPS, ReAct, and RestGPT.]
Figure 4: Scaling ability of RestGPT. (a), (b): scaling curves over the gold solution path on TMDB and Spotify; the length of the gold API solution path indicates the complexity of the instruction. (c): scaling curves over the number of APIs in the TMDB scenario.
|
2306.06624#24
|
RestGPT: Connecting Large Language Models with Real-World RESTful APIs
|
Tool-augmented large language models (LLMs) have achieved remarkable progress
in tackling a broad range of tasks. However, existing methods are mainly
restricted to specifically designed tools and fail to fulfill complex
instructions, having great limitations when confronted with real-world
scenarios. In this paper, we explore a more realistic scenario by connecting
LLMs with RESTful APIs, which adhere to the widely adopted REST software
architectural style for web service development. To address the practical
challenges of tackling complex instructions, we propose RestGPT, which exploits
the power of LLMs and conducts a coarse-to-fine online planning mechanism to
enhance the abilities of task decomposition and API selection. RestGPT also
contains an API executor tailored for calling RESTful APIs, which can
meticulously formulate parameters and parse API responses. To fully evaluate
the performance of RestGPT, we propose RestBench, a high-quality benchmark
which consists of two real-world scenarios and human-annotated instructions
with gold solution paths. Experiments show that RestGPT is able to achieve
impressive results in complex tasks and has strong robustness, which paves a
new way towards AGI. RestGPT and RestBench is publicly available at
https://restgpt.github.io/.
|
http://arxiv.org/pdf/2306.06624
|
Yifan Song, Weimin Xiong, Dawei Zhu, Wenhao Wu, Han Qian, Mingbo Song, Hailiang Huang, Cheng Li, Ke Wang, Rong Yao, Ye Tian, Sujian Li
|
cs.CL
|
Add RestBench to evaluate RestGPT
| null |
cs.CL
|
20230611
|
20230827
|
[
{
"id": "2302.04761"
},
{
"id": "2303.04671"
},
{
"id": "2205.12255"
},
{
"id": "2304.09842"
},
{
"id": "2305.17926"
},
{
"id": "2302.07842"
},
{
"id": "2210.03629"
},
{
"id": "2303.17580"
},
{
"id": "2305.18752"
},
{
"id": "2201.11903"
},
{
"id": "2303.09014"
},
{
"id": "2303.11366"
},
{
"id": "2305.15334"
},
{
"id": "2303.08128"
},
{
"id": "2304.08354"
},
{
"id": "2304.08244"
},
{
"id": "2301.00234"
},
{
"id": "2306.05301"
},
{
"id": "2302.01560"
},
{
"id": "2206.07682"
}
] |
2306.06770
| 24 |
| Condition | Description |
|---|---|
| STARS | Search-tree, A&R, LLM Selection |
| STARS+O | STARS with human oversight |
| Trial #2 | Task performance on second presentation after learning with STARS+O |
Table 1: Definition of experimental conditions.
fice" task, 12 objects are distributed on a desk that must be cleared (into the drawer, bookshelf, trash, recycling bin, or filing cabinet). The three tasks contain 58 unique objects for which the agent needs to learn a goal.
Simulation: Although prior work with the ITL agent has used a physical robot, this experiment is done in simulation, which is sufficient for investigating the grounding of concepts and interpreting and learning from the descriptions provided by STARS. If we seek to learn other knowledge, such as low-level control or perceptual classifiers, an embodied robot would be necessary.
|
2306.06770#24
|
Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis
|
Large language models (LLMs) offer significant promise as a knowledge source
for task learning. Prompt engineering has been shown to be effective for
eliciting knowledge from an LLM, but alone it is insufficient for acquiring
relevant, situationally grounded knowledge for an embodied agent learning novel
tasks. We describe a cognitive-agent approach that extends and complements
prompt engineering, mitigating its limitations and thus enabling an agent to
acquire new task knowledge matched to its native language capabilities,
embodiment, environment, and user preferences. The approach is to increase the
response space of LLMs and deploy general strategies, embedded within the
autonomous agent, to evaluate, repair, and select among candidate responses
produced by the LLM. We describe the approach and experiments that show how an
agent, by retrieving and evaluating a breadth of responses from the LLM, can
achieve 77-94% task completion in one-shot learning without user oversight. The
approach achieves 100% task completion when human oversight (such as an
indication of preference) is provided. Further, the type of oversight largely
shifts from explicit, natural language instruction to simple
confirmation/discomfirmation of high-quality responses that have been vetted by
the agent before presentation to a user.
|
http://arxiv.org/pdf/2306.06770
|
James R. Kirk, Robert E. Wray, Peter Lindes
|
cs.AI, cs.HC, cs.RO, I.2.6; I.2.7
|
7 pages, 8 figures, 3 tables, bibliography, appendix (34 pages
total). Text revised and results extended with additional tasks
| null |
cs.AI
|
20230611
|
20230822
|
[
{
"id": "2302.06706"
},
{
"id": "2112.04359"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2302.12246"
},
{
"id": "2208.09554"
},
{
"id": "2305.05658"
},
{
"id": "2303.17491"
},
{
"id": "2303.08774"
}
] |
2306.06624
| 25 |
our framework. The ablation results without the parser demonstrate that the schema-based parser enables LLMs to better comprehend and parse real-world API responses with complicated structures.
To investigate the performance of our method with different base LLMs, we implement RestGPT with ChatGPT (gpt-3.5-turbo-0301), Llama2-13B (Llama-2-13b-chat-hf), and Vicuna-13B (vicuna-13b-v1.5). As shown in Table 4, the performance of ChatGPT is slightly worse than text-davinci-003. Interestingly, we have tried all official checkpoints of Llama2-13B, but none of them were able to comprehend the prompt and generate valid plans. In contrast, Vicuna-13B, which is fine-tuned from Llama2 on user-shared conversations, can accomplish some simple instructions. This result indicates that by fine-tuning LLMs on ChatGPT-generated data, the model can acquire the ability to understand and follow complicated prompts.
# 5.3 Error Analysis
|
2306.06624#25
|
RestGPT: Connecting Large Language Models with Real-World RESTful APIs
|
Tool-augmented large language models (LLMs) have achieved remarkable progress
in tackling a broad range of tasks. However, existing methods are mainly
restricted to specifically designed tools and fail to fulfill complex
instructions, having great limitations when confronted with real-world
scenarios. In this paper, we explore a more realistic scenario by connecting
LLMs with RESTful APIs, which adhere to the widely adopted REST software
architectural style for web service development. To address the practical
challenges of tackling complex instructions, we propose RestGPT, which exploits
the power of LLMs and conducts a coarse-to-fine online planning mechanism to
enhance the abilities of task decomposition and API selection. RestGPT also
contains an API executor tailored for calling RESTful APIs, which can
meticulously formulate parameters and parse API responses. To fully evaluate
the performance of RestGPT, we propose RestBench, a high-quality benchmark
which consists of two real-world scenarios and human-annotated instructions
with gold solution paths. Experiments show that RestGPT is able to achieve
impressive results in complex tasks and has strong robustness, which paves a
new way towards AGI. RestGPT and RestBench is publicly available at
https://restgpt.github.io/.
|
http://arxiv.org/pdf/2306.06624
|
Yifan Song, Weimin Xiong, Dawei Zhu, Wenhao Wu, Han Qian, Mingbo Song, Hailiang Huang, Cheng Li, Ke Wang, Rong Yao, Ye Tian, Sujian Li
|
cs.CL
|
Add RestBench to evaluate RestGPT
| null |
cs.CL
|
20230611
|
20230827
|
[
{
"id": "2302.04761"
},
{
"id": "2303.04671"
},
{
"id": "2205.12255"
},
{
"id": "2304.09842"
},
{
"id": "2305.17926"
},
{
"id": "2302.07842"
},
{
"id": "2210.03629"
},
{
"id": "2303.17580"
},
{
"id": "2305.18752"
},
{
"id": "2201.11903"
},
{
"id": "2303.09014"
},
{
"id": "2303.11366"
},
{
"id": "2305.15334"
},
{
"id": "2303.08128"
},
{
"id": "2304.08354"
},
{
"id": "2304.08244"
},
{
"id": "2301.00234"
},
{
"id": "2306.05301"
},
{
"id": "2302.01560"
},
{
"id": "2206.07682"
}
] |
2306.06770
| 25 |
Learning Task: For each experiment, the user presents the task (e.g., "tidy kitchen") and primary subtasks (e.g., clearing, storing, and unloading all the objects from the table, counter, and dish rack). For all tasks, task success is measured by the fraction of objects moved to a location consistent with user preferences. Also, when another object is manipulated to achieve a task (e.g., opening a refrigerator door to put away ketchup), it must be in its desired state for task success (e.g., the door must be closed). For the "tidy kitchen" task, four object types have multiple instances that must be treated differently based on their positions (e.g., a mug on the table must be in the dishwasher or sink, but a mug in the dish rack must be in the cupboard). Using the approach in Figure 2 (or a STARS variant as below), the agent acquires goal descriptions for each perceived object. It then uses the processing described in Figure 4 to learn the goal and action policy, enabling it to correctly process that object in the future without the LLM, planning, or oversight.
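The task-success measure just described can be stated as a short sketch (our own; the data structures and example objects are illustrative):

```python
def task_success(final_state: dict, preferences: dict) -> float:
    # Fraction of objects (and manipulated fixtures, e.g. the fridge door)
    # ending in the location/state the user prefers.
    hits = sum(final_state.get(obj) == want
               for obj, want in preferences.items())
    return hits / len(preferences)

prefs = {"mug@table": "dishwasher", "mug@rack": "cupboard",
         "fridge-door": "closed"}
state = {"mug@table": "dishwasher", "mug@rack": "cupboard",
         "fridge-door": "open"}
print(round(task_success(state, prefs), 2))  # 0.67
```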
|
2306.06770#25
|
Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis
|
Large language models (LLMs) offer significant promise as a knowledge source
for task learning. Prompt engineering has been shown to be effective for
eliciting knowledge from an LLM, but alone it is insufficient for acquiring
relevant, situationally grounded knowledge for an embodied agent learning novel
tasks. We describe a cognitive-agent approach that extends and complements
prompt engineering, mitigating its limitations and thus enabling an agent to
acquire new task knowledge matched to its native language capabilities,
embodiment, environment, and user preferences. The approach is to increase the
response space of LLMs and deploy general strategies, embedded within the
autonomous agent, to evaluate, repair, and select among candidate responses
produced by the LLM. We describe the approach and experiments that show how an
agent, by retrieving and evaluating a breadth of responses from the LLM, can
achieve 77-94% task completion in one-shot learning without user oversight. The
approach achieves 100% task completion when human oversight (such as an
indication of preference) is provided. Further, the type of oversight largely
shifts from explicit, natural language instruction to simple
confirmation/disconfirmation of high-quality responses that have been vetted by
the agent before presentation to a user.
|
http://arxiv.org/pdf/2306.06770
|
James R. Kirk, Robert E. Wray, Peter Lindes
|
cs.AI, cs.HC, cs.RO, I.2.6; I.2.7
|
7 pages, 8 figures, 3 tables, bibliography, appendix (34 pages
total). Text revised and results extended with additional tasks
| null |
cs.AI
|
20230611
|
20230822
|
[
{
"id": "2302.06706"
},
{
"id": "2112.04359"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2302.12246"
},
{
"id": "2208.09554"
},
{
"id": "2305.05658"
},
{
"id": "2303.17491"
},
{
"id": "2303.08774"
}
] |
2306.06624
| 26 |
# 5.3 Error Analysis
To further investigate the effectiveness of different modules in RestGPT, we conduct error analysis. In Figure 3, we classify errors based on the module in which they occur. We discover that the majority of errors occur during the planning stage, i.e., within the planner (purple) and API selector (blue). The planner sometimes loses track of its intended objective after multiple rounds of execution, resulting in an early exit before completing the instruction. The API selector may either select incorrect APIs or hallucinate made-up in-path parameters. This error analysis highlights the insufficient planning and decision-making capabilities of LLMs.
Compared with text-davinci-003, ChatGPT tends to make more errors in the planning stage, leading to slightly worse performance on both scenarios. More specifically, we find that ChatGPT is often too verbose and tends to continue planning even after the user instruction has been fulfilled. This behavior can be attributed to the fact that ChatGPT is trained specifically for conversational interactions, which encourages it to generate lengthier responses.
# 5.4 Scaling Curves
In this section, we aim to demonstrate the scaling ability of RestGPT on two dimensions: scaling the difficulty of the tasks and scaling the number of APIs.
For each instruction in RestBench, the length of the gold solution path indicates the complexity of the instruction. We calculate the success rate of models on instructions of varying complexity.
|
2306.06624#26
|
RestGPT: Connecting Large Language Models with Real-World RESTful APIs
|
Tool-augmented large language models (LLMs) have achieved remarkable progress
in tackling a broad range of tasks. However, existing methods are mainly
restricted to specifically designed tools and fail to fulfill complex
instructions, having great limitations when confronted with real-world
scenarios. In this paper, we explore a more realistic scenario by connecting
LLMs with RESTful APIs, which adhere to the widely adopted REST software
architectural style for web service development. To address the practical
challenges of tackling complex instructions, we propose RestGPT, which exploits
the power of LLMs and conducts a coarse-to-fine online planning mechanism to
enhance the abilities of task decomposition and API selection. RestGPT also
contains an API executor tailored for calling RESTful APIs, which can
meticulously formulate parameters and parse API responses. To fully evaluate
the performance of RestGPT, we propose RestBench, a high-quality benchmark
which consists of two real-world scenarios and human-annotated instructions
with gold solution paths. Experiments show that RestGPT is able to achieve
impressive results in complex tasks and has strong robustness, which paves a
new way towards AGI. RestGPT and RestBench is publicly available at
https://restgpt.github.io/.
|
http://arxiv.org/pdf/2306.06624
|
Yifan Song, Weimin Xiong, Dawei Zhu, Wenhao Wu, Han Qian, Mingbo Song, Hailiang Huang, Cheng Li, Ke Wang, Rong Yao, Ye Tian, Sujian Li
|
cs.CL
|
Add RestBench to evaluate RestGPT
| null |
cs.CL
|
20230611
|
20230827
|
[
{
"id": "2302.04761"
},
{
"id": "2303.04671"
},
{
"id": "2205.12255"
},
{
"id": "2304.09842"
},
{
"id": "2305.17926"
},
{
"id": "2302.07842"
},
{
"id": "2210.03629"
},
{
"id": "2303.17580"
},
{
"id": "2305.18752"
},
{
"id": "2201.11903"
},
{
"id": "2303.09014"
},
{
"id": "2303.11366"
},
{
"id": "2305.15334"
},
{
"id": "2303.08128"
},
{
"id": "2304.08354"
},
{
"id": "2304.08244"
},
{
"id": "2301.00234"
},
{
"id": "2306.05301"
},
{
"id": "2302.01560"
},
{
"id": "2206.07682"
}
] |
2306.06770
| 26 |
Experimental conditions: Experimental conditions are enumerated in Table 1. The TBP conditions are baselines for assessing the impact of the components of STARS. For all conditions, the LLM used is GPT-3 (for TBP, Search Tree, and Repair) and GPT-4 (for Selection).² In all conditions, a user provides the initial task. In the Oversight conditions, the user reviews up to 5 responses. In non-oversight conditions, the choice of the goal is based on the highest mean log probability of candidates (ST and STAR) or the Selection strategy (STS and STARS).
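The non-oversight choice rule reduces to ranking candidates by mean token log probability; a sketch with made-up numbers:

```python
def pick_by_mean_logprob(candidates):
    # candidates: (text, [token_logprob, ...]) pairs from beam search
    return max(candidates, key=lambda c: sum(c[1]) / len(c[1]))[0]

cands = [("goal A", [-0.2, -0.5, -0.1]),   # mean ≈ -0.267
         ("goal B", [-0.05, -0.4])]        # mean ≈ -0.225
print(pick_by_mean_logprob(cands))  # goal B
```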
Measures: We assess conditions in three dimensions: per-

² GPT-4 doesn't support beam search (using logprobs), but Selection doesn't require beam search and GPT-4 demonstrated better, more consistent results.
|
2306.06770#26
|
Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis
|
Large language models (LLMs) offer significant promise as a knowledge source
for task learning. Prompt engineering has been shown to be effective for
eliciting knowledge from an LLM, but alone it is insufficient for acquiring
relevant, situationally grounded knowledge for an embodied agent learning novel
tasks. We describe a cognitive-agent approach that extends and complements
prompt engineering, mitigating its limitations and thus enabling an agent to
acquire new task knowledge matched to its native language capabilities,
embodiment, environment, and user preferences. The approach is to increase the
response space of LLMs and deploy general strategies, embedded within the
autonomous agent, to evaluate, repair, and select among candidate responses
produced by the LLM. We describe the approach and experiments that show how an
agent, by retrieving and evaluating a breadth of responses from the LLM, can
achieve 77-94% task completion in one-shot learning without user oversight. The
approach achieves 100% task completion when human oversight (such as an
indication of preference) is provided. Further, the type of oversight largely
shifts from explicit, natural language instruction to simple
confirmation/disconfirmation of high-quality responses that have been vetted by
the agent before presentation to a user.
|
http://arxiv.org/pdf/2306.06770
|
James R. Kirk, Robert E. Wray, Peter Lindes
|
cs.AI, cs.HC, cs.RO, I.2.6; I.2.7
|
7 pages, 8 figures, 3 tables, bibliography, appendix (34 pages
total). Text revised and results extended with additional tasks
| null |
cs.AI
|
20230611
|
20230822
|
[
{
"id": "2302.06706"
},
{
"id": "2112.04359"
},
{
"id": "2110.14168"
},
{
"id": "2209.11302"
},
{
"id": "2302.12246"
},
{
"id": "2208.09554"
},
{
"id": "2305.05658"
},
{
"id": "2303.17491"
},
{
"id": "2303.08774"
}
] |